Dec 01 02:56:09 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 01 02:56:09 crc restorecon[4573]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 02:56:09 crc restorecon[4573]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc 
restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc 
restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 
02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 
crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 
02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:09 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 02:56:10 crc restorecon[4573]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 
crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc 
restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 02:56:10 crc restorecon[4573]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 01 02:56:10 crc kubenswrapper[4880]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 02:56:10 crc kubenswrapper[4880]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 01 02:56:10 crc kubenswrapper[4880]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 02:56:10 crc kubenswrapper[4880]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 01 02:56:10 crc kubenswrapper[4880]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 01 02:56:10 crc kubenswrapper[4880]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.582349 4880 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589178 4880 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589209 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589220 4880 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589228 4880 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589237 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589248 4880 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589259 4880 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589267 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589274 4880 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589282 4880 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589290 4880 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589298 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589306 4880 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589313 4880 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589321 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589329 4880 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589340 4880 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589351 4880 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589373 4880 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589382 4880 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589391 4880 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589399 4880 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589407 4880 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589415 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589422 4880 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589430 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589438 4880 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589445 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589453 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589461 4880 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589468 4880 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589476 4880 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589484 4880 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589492 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589499 4880 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589507 4880 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589514 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589522 4880 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589530 4880 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589539 4880 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589549 4880 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589560 4880 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589569 4880 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589577 4880 feature_gate.go:330] unrecognized feature gate: Example
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589585 4880 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589593 4880 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589603 4880 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589613 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589623 4880 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589631 4880 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589642 4880 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589652 4880 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589661 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589669 4880 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589677 4880 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589686 4880 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589694 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589702 4880 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589709 4880 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589717 4880 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589724 4880 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589732 4880 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589740 4880 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589748 4880 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589756 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589764 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589773 4880 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589791 4880 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589799 4880 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589808 4880 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.589816 4880 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590324 4880 flags.go:64] FLAG: --address="0.0.0.0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590349 4880 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590368 4880 flags.go:64] FLAG: --anonymous-auth="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590380 4880 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590391 4880 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590401 4880 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590412 4880 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590423 4880 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590433 4880 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590443 4880 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590453 4880 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590467 4880 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590477 4880 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590486 4880 flags.go:64] FLAG: --cgroup-root=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590494 4880 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590503 4880 flags.go:64] FLAG: --client-ca-file=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590512 4880 flags.go:64] FLAG: --cloud-config=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590521 4880 flags.go:64] FLAG: --cloud-provider=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590530 4880 flags.go:64] FLAG: --cluster-dns="[]"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590542 4880 flags.go:64] FLAG: --cluster-domain=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590551 4880 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590560 4880 flags.go:64] FLAG: --config-dir=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590569 4880 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590578 4880 flags.go:64] FLAG: --container-log-max-files="5"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590589 4880 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590598 4880 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590607 4880 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590617 4880 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590627 4880 flags.go:64] FLAG: --contention-profiling="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590636 4880 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590644 4880 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590654 4880 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590663 4880 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590674 4880 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590683 4880 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590692 4880 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590701 4880 flags.go:64] FLAG: --enable-load-reader="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590710 4880 flags.go:64] FLAG: --enable-server="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590719 4880 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590730 4880 flags.go:64] FLAG: --event-burst="100"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590739 4880 flags.go:64] FLAG: --event-qps="50"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590748 4880 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590757 4880 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590766 4880 flags.go:64] FLAG: --eviction-hard=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590776 4880 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590785 4880 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590793 4880 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590804 4880 flags.go:64] FLAG: --eviction-soft=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590813 4880 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590822 4880 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590831 4880 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590840 4880 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590849 4880 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590858 4880 flags.go:64] FLAG: --fail-swap-on="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590867 4880 flags.go:64] FLAG: --feature-gates=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590914 4880 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590923 4880 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590932 4880 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590941 4880 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590951 4880 flags.go:64] FLAG: --healthz-port="10248"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590960 4880 flags.go:64] FLAG: --help="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590969 4880 flags.go:64] FLAG: --hostname-override=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590978 4880 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590987 4880 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.590997 4880 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591006 4880 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591015 4880 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591024 4880 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591034 4880 flags.go:64] FLAG: --image-service-endpoint=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591043 4880 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591052 4880 flags.go:64] FLAG: --kube-api-burst="100"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591061 4880 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591071 4880 flags.go:64] FLAG: --kube-api-qps="50"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591079 4880 flags.go:64] FLAG: --kube-reserved=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591088 4880 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591097 4880 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591106 4880 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591115 4880 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591124 4880 flags.go:64] FLAG: --lock-file=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591133 4880 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591142 4880 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591151 4880 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591174 4880 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591184 4880 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591193 4880 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591202 4880 flags.go:64] FLAG: --logging-format="text"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591211 4880 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591221 4880 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591230 4880 flags.go:64] FLAG: --manifest-url=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591239 4880 flags.go:64] FLAG: --manifest-url-header=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591250 4880 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591259 4880 flags.go:64] FLAG: --max-open-files="1000000"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591271 4880 flags.go:64] FLAG: --max-pods="110"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591280 4880 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591289 4880 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591299 4880 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591309 4880 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591318 4880 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591327 4880 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591336 4880 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591356 4880 flags.go:64] FLAG: --node-status-max-images="50"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591365 4880 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591373 4880 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591382 4880 flags.go:64] FLAG: --pod-cidr=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591391 4880 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591404 4880 flags.go:64] FLAG: --pod-manifest-path=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591412 4880 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591421 4880 flags.go:64] FLAG: --pods-per-core="0"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591430 4880 flags.go:64] FLAG: --port="10250"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591440 4880 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591449 4880 flags.go:64] FLAG: --provider-id=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591458 4880 flags.go:64] FLAG: --qos-reserved=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591467 4880 flags.go:64] FLAG: --read-only-port="10255"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591476 4880 flags.go:64] FLAG: --register-node="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591485 4880 flags.go:64] FLAG: --register-schedulable="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591494 4880 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591509 4880 flags.go:64] FLAG: --registry-burst="10"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591517 4880 flags.go:64] FLAG: --registry-qps="5"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591526 4880 flags.go:64] FLAG: --reserved-cpus=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591536 4880 flags.go:64] FLAG: --reserved-memory=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591547 4880 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591556 4880 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591565 4880 flags.go:64] FLAG: --rotate-certificates="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591574 4880 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591583 4880 flags.go:64] FLAG: --runonce="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591591 4880 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591601 4880 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591610 4880 flags.go:64] FLAG: --seccomp-default="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591625 4880 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591634 4880 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591643 4880 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591652 4880 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591662 4880 flags.go:64] FLAG: --storage-driver-password="root"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591670 4880 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591679 4880 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591688 4880 flags.go:64] FLAG: --storage-driver-user="root"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591697 4880 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591707 4880 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591716 4880 flags.go:64] FLAG: --system-cgroups=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591725 4880 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591738 4880 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591747 4880 flags.go:64] FLAG: --tls-cert-file=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591756 4880 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591769 4880 flags.go:64] FLAG: --tls-min-version=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591778 4880 flags.go:64] FLAG: --tls-private-key-file=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591786 4880 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591795 4880 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591813 4880 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591823 4880 flags.go:64] FLAG: --v="2"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591834 4880 flags.go:64] FLAG: --version="false"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591844 4880 flags.go:64] FLAG: --vmodule=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591855 4880 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.591864 4880 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592089 4880 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592099 4880 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592108 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592116 4880 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592123 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592132 4880 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592140 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592155 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592164 4880 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592172 4880 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592182 4880 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592192 4880 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592200 4880 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592208 4880 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592216 4880 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592227 4880 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592237 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592246 4880 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592254 4880 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592262 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592271 4880 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592279 4880 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592287 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592294 4880 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592302 4880 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592310 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592318 4880 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592326 4880 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592333 4880 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592344 4880 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592354 4880 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592363 4880 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592375 4880 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592383 4880 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592392 4880 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592400 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592410 4880 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592418 4880 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592427 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592439 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592448 4880 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592457 4880 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592465 4880 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592473 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592481 4880 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592488 4880 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592497 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592505 4880 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592512 4880 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592520 4880 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592528 4880 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592535 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592543 4880 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592551 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592559 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592567 4880 feature_gate.go:330] unrecognized feature gate: Example
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592575 4880 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592583 4880 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592591 4880 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592599 4880 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592609 4880 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592618 4880 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592627 4880 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592635 4880 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592644 4880 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592652 4880 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592660 4880 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592668 4880 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592676 4880 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592684 4880 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.592691 4880 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.592717 4880 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.600609 4880 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.600664 4880 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600789 4880 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600803 4880 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600814 4880 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600823 4880 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600833 4880 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600842 4880 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600851 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600860 4880 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600868 4880 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600900 4880 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600908 4880 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600917 4880 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600925 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600933 4880 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600940 4880 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600948 4880 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600955 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600963 4880 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600971 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600979 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600986 4880 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.600997 4880 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601009 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601018 4880 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601028 4880 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601037 4880 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601045 4880 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601054 4880 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601065 4880 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601076 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601085 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601093 4880 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601102 4880 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601111 4880 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601118 4880 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601126 4880 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 02:56:10 crc 
kubenswrapper[4880]: W1201 02:56:10.601135 4880 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601144 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601152 4880 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601161 4880 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601169 4880 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601178 4880 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601186 4880 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601194 4880 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601204 4880 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601212 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601220 4880 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601228 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601235 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601244 4880 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601251 4880 feature_gate.go:330] unrecognized feature 
gate: EtcdBackendQuota Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601259 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601267 4880 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601274 4880 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601283 4880 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601290 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601298 4880 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601308 4880 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601318 4880 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601328 4880 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601337 4880 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601345 4880 feature_gate.go:330] unrecognized feature gate: Example Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601353 4880 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601361 4880 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601370 4880 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601380 4880 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601388 4880 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601397 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601407 4880 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601417 4880 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601425 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.601439 4880 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601668 4880 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601680 4880 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601691 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601699 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601708 4880 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601717 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601725 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601733 4880 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601741 4880 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601748 4880 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601756 4880 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601764 4880 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601772 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601780 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601788 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601795 4880 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601803 4880 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601811 4880 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601818 4880 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601826 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601834 4880 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601842 4880 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601851 4880 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 
02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601859 4880 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601867 4880 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601898 4880 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601906 4880 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601913 4880 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601925 4880 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601936 4880 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601946 4880 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601955 4880 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601963 4880 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601972 4880 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601980 4880 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.601991 4880 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602000 4880 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602009 4880 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602017 4880 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602026 4880 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602034 4880 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602044 4880 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602052 4880 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602060 4880 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602069 4880 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602077 4880 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602085 4880 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602093 4880 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602101 4880 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602117 4880 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 
02:56:10.602125 4880 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602133 4880 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602143 4880 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602150 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602158 4880 feature_gate.go:330] unrecognized feature gate: Example Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602166 4880 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602173 4880 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602181 4880 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602189 4880 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602196 4880 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602221 4880 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602232 4880 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602240 4880 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602249 4880 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602256 4880 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602265 4880 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602273 4880 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602281 4880 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602291 4880 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602300 4880 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.602310 4880 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.602323 4880 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.602580 4880 server.go:940] "Client rotation is on, will bootstrap in background" Dec 01 
02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.607034 4880 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.607165 4880 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.607984 4880 server.go:997] "Starting client certificate rotation" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.608021 4880 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.608448 4880 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-17 01:55:23.811095285 +0000 UTC Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.608593 4880 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1126h59m13.20250786s for next certificate rotation Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.615509 4880 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.617827 4880 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.629039 4880 log.go:25] "Validated CRI v1 runtime API" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.650959 4880 log.go:25] "Validated CRI v1 image API" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.652644 4880 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.656746 4880 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 
2025-12-01-02-51-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.656823 4880 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.680432 4880 manager.go:217] Machine: {Timestamp:2025-12-01 02:56:10.678643532 +0000 UTC m=+0.189897954 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:083280f8-ec38-4b4c-9ae8-83321ce8fce0 BootID:1be2706e-f8d0-4d95-b2c7-cbb60ac451ce Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 
DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bd:3b:b7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bd:3b:b7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:50:e9:5a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7b:e3:08 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:98:62:44 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:75:be:e2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:79:c0:74:f2:dd Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:0f:78:67:6c:84 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 
Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.680782 4880 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.681009 4880 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.681806 4880 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.682236 4880 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.682310 4880 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.682765 4880 topology_manager.go:138] "Creating topology manager with none policy"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.682792 4880 container_manager_linux.go:303] "Creating device plugin manager"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.683198 4880 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.683262 4880 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.683578 4880 state_mem.go:36] "Initialized new in-memory state store"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.684354 4880 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.685680 4880 kubelet.go:418] "Attempting to sync node with API server"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.685724 4880 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.685778 4880 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.685837 4880 kubelet.go:324] "Adding apiserver pod source"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.685858 4880 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.688432 4880 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.689000 4880 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.689955 4880 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.690082 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.690324 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.690515 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.690733 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.690833 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691100 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691236 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691344 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691467 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691604 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691734 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691849 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.691992 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.692117 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.692230 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.692332 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.692457 4880 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.693464 4880 server.go:1280] "Started kubelet"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.693463 4880 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.694906 4880 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.695116 4880 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.696128 4880 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 01 02:56:10 crc systemd[1]: Started Kubernetes Kubelet.
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.698510 4880 server.go:460] "Adding debug handlers to kubelet server"
Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.696733 4880 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187cf7f183d60485 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 02:56:10.693411973 +0000 UTC m=+0.204666395,LastTimestamp:2025-12-01 02:56:10.693411973 +0000 UTC m=+0.204666395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.704429 4880 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.704497 4880 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.705084 4880 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:09:54.378346591 +0000 UTC
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.705162 4880 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 837h13m43.673192365s for next certificate rotation
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.705424 4880 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.705463 4880 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.705667 4880 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.705956 4880 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.706438 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused
Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.706553 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError"
Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.706601 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="200ms"
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.711570 4880 factory.go:55] Registering systemd factory
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.711617 4880 factory.go:221] Registration of the systemd container factory successfully
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.721284 4880 factory.go:153] Registering CRI-O factory
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.721628 4880 factory.go:221] Registration of the crio container factory successfully
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.721978 4880 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.722231 4880 factory.go:103] Registering Raw factory
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.722419 4880 manager.go:1196] Started watching for new ooms in manager
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.723866 4880 manager.go:319] Starting recovery of all containers
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.725784 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.725920 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.725953 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.725977 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726004 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726026 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726050 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726075 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726104 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726132 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726182 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726208 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726232 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726260 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726287 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726311 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726342 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726365 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726438 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726511 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726590 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726650 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726678 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726701 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726726 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726750 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726846 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.726949 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.727030 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.728236 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729231 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729365 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729509 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729646 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729714 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729745 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.729810 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.730780 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731008 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731042 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731067 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731092 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731116 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731141 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731164 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731193 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731217 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731241 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731264 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731289 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731347 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731376 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731412 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731440 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731472 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731497 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731521 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731543 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731568 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731592 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731616 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731639 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731664 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731687 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731712 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731736 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731761 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731788 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731811 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731836 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731861 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731915 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731940 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731964 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.731987 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732070 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732104 4880 reconstruct.go:130] "Volume is marked as uncertain and
added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732128 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732154 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732177 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732203 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732228 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732252 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732279 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732304 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732330 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732354 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732377 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732405 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732431 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732459 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732562 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732587 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732612 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732635 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732659 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732683 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732706 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732732 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732759 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732784 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732808 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732830 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732856 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732929 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732958 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.732987 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734249 4880 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734306 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734339 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734367 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734397 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734429 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734459 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734489 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734522 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734551 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734574 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734594 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734612 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734630 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734646 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734665 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734683 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734698 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734716 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734732 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734749 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734764 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734781 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734798 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734817 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734834 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734850 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734865 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734903 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734921 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734937 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734955 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734975 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.734992 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735009 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735025 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735042 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735059 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735074 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735092 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735109 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735125 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735140 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735154 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735171 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735187 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735201 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735215 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735233 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735248 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735264 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735281 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735296 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735312 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" 
seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735326 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735343 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735361 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735377 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735393 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735409 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735425 4880 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735440 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735455 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735472 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735490 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735507 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735522 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735540 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735555 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735569 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735586 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735601 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735619 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735635 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735654 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735670 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735686 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735702 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735718 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" 
seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735735 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735750 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735764 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735785 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735801 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735818 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 
02:56:10.735834 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735851 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735867 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735913 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735929 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735945 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735960 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735974 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.735989 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736006 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736022 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736038 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736053 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736069 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736083 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736101 4880 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736116 4880 reconstruct.go:97] "Volume reconstruction finished" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.736126 4880 reconciler.go:26] "Reconciler: start to sync state" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.757432 4880 manager.go:324] Recovery completed Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.771774 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.773789 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.773846 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.773866 4880 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.776528 4880 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.776662 4880 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.776760 4880 state_mem.go:36] "Initialized new in-memory state store" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.781007 4880 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.782608 4880 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.782655 4880 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.782690 4880 kubelet.go:2335] "Starting kubelet main sync loop" Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.782746 4880 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 01 02:56:10 crc kubenswrapper[4880]: W1201 02:56:10.786558 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.786638 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Dec 01 02:56:10 crc 
kubenswrapper[4880]: I1201 02:56:10.795413 4880 policy_none.go:49] "None policy: Start" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.796556 4880 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.796602 4880 state_mem.go:35] "Initializing new in-memory state store" Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.806077 4880 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.846772 4880 manager.go:334] "Starting Device Plugin manager" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.846823 4880 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.846839 4880 server.go:79] "Starting device plugin registration server" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.847299 4880 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.847319 4880 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.847735 4880 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.847818 4880 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.847827 4880 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.856173 4880 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.883537 4880 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.883611 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.884516 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.884551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.884563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.884700 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.884821 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.884862 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885449 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885479 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885622 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885690 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885716 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885762 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885762 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.885805 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886444 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886464 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886470 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886492 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886497 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886570 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886690 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.886729 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887144 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887300 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887377 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887408 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887674 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887701 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887917 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887937 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.887945 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888055 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888074 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888307 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888317 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888533 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888552 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.888561 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.907701 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="400ms" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941721 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc 
kubenswrapper[4880]: I1201 02:56:10.941764 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941787 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941812 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941836 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941864 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941903 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941939 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.941962 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.942083 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.942129 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.942155 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.942233 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.942257 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.942278 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.947831 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.948819 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.948967 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:10 crc 
kubenswrapper[4880]: I1201 02:56:10.949050 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:10 crc kubenswrapper[4880]: I1201 02:56:10.949156 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 02:56:10 crc kubenswrapper[4880]: E1201 02:56:10.949991 4880 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.043807 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044203 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044314 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044478 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044622 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044778 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044938 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045075 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045202 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045333 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045485 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045620 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045780 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045933 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045047 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 
crc kubenswrapper[4880]: I1201 02:56:11.045174 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045305 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044587 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045456 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044447 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045592 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044906 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045740 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044120 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.045929 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.046168 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.046205 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.044749 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.046720 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.150385 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.152188 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.152216 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 
02:56:11.152227 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.152251 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 02:56:11 crc kubenswrapper[4880]: E1201 02:56:11.152500 4880 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.216345 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.230572 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.252785 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.257170 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-edfc5a6c82b8245e42d75eaef436b5dd53aecefdfbb58b976c2e045b2ed43bdb WatchSource:0}: Error finding container edfc5a6c82b8245e42d75eaef436b5dd53aecefdfbb58b976c2e045b2ed43bdb: Status 404 returned error can't find the container with id edfc5a6c82b8245e42d75eaef436b5dd53aecefdfbb58b976c2e045b2ed43bdb Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.261465 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-948df3af15c33060f77bf32020b58465aa3ab4e9ce95d1a0649e98122205ae30 WatchSource:0}: Error finding container 948df3af15c33060f77bf32020b58465aa3ab4e9ce95d1a0649e98122205ae30: Status 404 returned error can't find the container with id 948df3af15c33060f77bf32020b58465aa3ab4e9ce95d1a0649e98122205ae30 Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.266607 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.284000 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cb48c85154e6a27e8f2e1fd12884b7394f43e7a886044be61f0b230652c8e13d WatchSource:0}: Error finding container cb48c85154e6a27e8f2e1fd12884b7394f43e7a886044be61f0b230652c8e13d: Status 404 returned error can't find the container with id cb48c85154e6a27e8f2e1fd12884b7394f43e7a886044be61f0b230652c8e13d Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.291978 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.297636 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-393ce51854ec50a2f29ed2542ebc3591facc986207d36c819302c799b2cdbb46 WatchSource:0}: Error finding container 393ce51854ec50a2f29ed2542ebc3591facc986207d36c819302c799b2cdbb46: Status 404 returned error can't find the container with id 393ce51854ec50a2f29ed2542ebc3591facc986207d36c819302c799b2cdbb46 Dec 01 02:56:11 crc kubenswrapper[4880]: E1201 02:56:11.309947 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="800ms" Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.315957 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-61fcde81caef8dc397fef25094feac538466b6afff9f4e1be6950ee23ea98e0e 
WatchSource:0}: Error finding container 61fcde81caef8dc397fef25094feac538466b6afff9f4e1be6950ee23ea98e0e: Status 404 returned error can't find the container with id 61fcde81caef8dc397fef25094feac538466b6afff9f4e1be6950ee23ea98e0e Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.553520 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.554616 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.554640 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.554648 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.554667 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 02:56:11 crc kubenswrapper[4880]: E1201 02:56:11.554895 4880 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc" Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.596282 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Dec 01 02:56:11 crc kubenswrapper[4880]: E1201 02:56:11.596357 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection 
refused" logger="UnhandledError" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.696550 4880 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.702512 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Dec 01 02:56:11 crc kubenswrapper[4880]: E1201 02:56:11.702611 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Dec 01 02:56:11 crc kubenswrapper[4880]: W1201 02:56:11.779216 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Dec 01 02:56:11 crc kubenswrapper[4880]: E1201 02:56:11.779302 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.786336 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668" exitCode=0 Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.786397 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.786471 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb48c85154e6a27e8f2e1fd12884b7394f43e7a886044be61f0b230652c8e13d"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.786640 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.789128 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.789150 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.789160 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.789810 4880 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d" exitCode=0 Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.790104 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 
02:56:11.790146 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"edfc5a6c82b8245e42d75eaef436b5dd53aecefdfbb58b976c2e045b2ed43bdb"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.790225 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.790845 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791133 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791153 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791162 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791646 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791677 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791693 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.791978 4880 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9" exitCode=0 Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.792025 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.792040 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"948df3af15c33060f77bf32020b58465aa3ab4e9ce95d1a0649e98122205ae30"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.792083 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.798185 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.798235 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.798248 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.799537 4880 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13" exitCode=0 Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.799615 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.799641 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61fcde81caef8dc397fef25094feac538466b6afff9f4e1be6950ee23ea98e0e"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.799727 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.800793 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.800821 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.800832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.803190 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1"} Dec 01 02:56:11 crc kubenswrapper[4880]: I1201 02:56:11.803218 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"393ce51854ec50a2f29ed2542ebc3591facc986207d36c819302c799b2cdbb46"} Dec 01 02:56:12 crc kubenswrapper[4880]: E1201 02:56:12.110536 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="1.6s" Dec 01 02:56:12 crc kubenswrapper[4880]: E1201 02:56:12.178858 4880 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187cf7f183d60485 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 02:56:10.693411973 +0000 UTC m=+0.204666395,LastTimestamp:2025-12-01 02:56:10.693411973 +0000 UTC m=+0.204666395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 02:56:12 crc kubenswrapper[4880]: W1201 02:56:12.199756 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.39:6443: connect: connection refused Dec 01 02:56:12 crc kubenswrapper[4880]: E1201 02:56:12.199825 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.39:6443: connect: connection refused" logger="UnhandledError" Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.355282 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.356237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.356286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:12 
crc kubenswrapper[4880]: I1201 02:56:12.356298 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.356327 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 02:56:12 crc kubenswrapper[4880]: E1201 02:56:12.356829 4880 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.39:6443: connect: connection refused" node="crc"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.808514 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.808560 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.808574 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.808666 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.810145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.810168 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.810179 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.813205 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.813232 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.813243 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.813312 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.814598 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.814618 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.814627 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.817308 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.817331 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.817343 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.817356 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.819419 4880 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07" exitCode=0
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.819465 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.819544 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.820325 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.820345 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.820355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.823547 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9"}
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.823683 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.824391 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.824485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:12 crc kubenswrapper[4880]: I1201 02:56:12.824573 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.222585 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.231752 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.829557 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4"}
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.829720 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.831039 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.831096 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.831114 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.832072 4880 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b" exitCode=0
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.832143 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b"}
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.832429 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.833065 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.833381 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.833486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.833582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.834399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.834437 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.834456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.956958 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.958461 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.958508 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.958519 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:13 crc kubenswrapper[4880]: I1201 02:56:13.958544 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.787733 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.838364 4880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.838406 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.838800 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981"}
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.838823 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879"}
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.838833 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96"}
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.838911 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.839453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.839476 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.839483 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.840160 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.840178 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:14 crc kubenswrapper[4880]: I1201 02:56:14.840185 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.489155 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.848419 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03"}
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.848479 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c"}
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.848488 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.848492 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.849639 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.849673 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.849691 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.850487 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.850551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:15 crc kubenswrapper[4880]: I1201 02:56:15.850573 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.850712 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.850737 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.852293 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.852350 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.852370 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.852462 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.852495 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:16 crc kubenswrapper[4880]: I1201 02:56:16.852512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.105589 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.105808 4880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.106054 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.107940 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.107982 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.108000 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.805980 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.855049 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.860203 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.861044 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:17 crc kubenswrapper[4880]: I1201 02:56:17.861084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.863993 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.864285 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.865810 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.865915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.865936 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.905605 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.905779 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.906975 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.907018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:18 crc kubenswrapper[4880]: I1201 02:56:18.907038 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.524019 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.524340 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.526339 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.526427 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.526446 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.884663 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.884917 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.886455 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.886639 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:19 crc kubenswrapper[4880]: I1201 02:56:19.886801 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:20 crc kubenswrapper[4880]: I1201 02:56:20.105624 4880 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 01 02:56:20 crc kubenswrapper[4880]: I1201 02:56:20.105747 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 02:56:20 crc kubenswrapper[4880]: E1201 02:56:20.856376 4880 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 01 02:56:22 crc kubenswrapper[4880]: I1201 02:56:22.180746 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 01 02:56:22 crc kubenswrapper[4880]: I1201 02:56:22.180993 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:22 crc kubenswrapper[4880]: I1201 02:56:22.182298 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:22 crc kubenswrapper[4880]: I1201 02:56:22.182349 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:22 crc kubenswrapper[4880]: I1201 02:56:22.182367 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:22 crc kubenswrapper[4880]: I1201 02:56:22.697371 4880 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 01 02:56:23 crc kubenswrapper[4880]: W1201 02:56:23.291329 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 01 02:56:23 crc kubenswrapper[4880]: I1201 02:56:23.291430 4880 trace.go:236] Trace[467595404]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 02:56:13.289) (total time: 10001ms):
Dec 01 02:56:23 crc kubenswrapper[4880]: Trace[467595404]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (02:56:23.291)
Dec 01 02:56:23 crc kubenswrapper[4880]: Trace[467595404]: [10.001914639s] [10.001914639s] END
Dec 01 02:56:23 crc kubenswrapper[4880]: E1201 02:56:23.291452 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 01 02:56:23 crc kubenswrapper[4880]: E1201 02:56:23.712745 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 01 02:56:23 crc kubenswrapper[4880]: W1201 02:56:23.819258 4880 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 01 02:56:23 crc kubenswrapper[4880]: I1201 02:56:23.819423 4880 trace.go:236] Trace[1354001099]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 02:56:13.817) (total time: 10001ms):
Dec 01 02:56:23 crc kubenswrapper[4880]: Trace[1354001099]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (02:56:23.819)
Dec 01 02:56:23 crc kubenswrapper[4880]: Trace[1354001099]: [10.001784088s] [10.001784088s] END
Dec 01 02:56:23 crc kubenswrapper[4880]: E1201 02:56:23.819487 4880 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 01 02:56:23 crc kubenswrapper[4880]: E1201 02:56:23.960107 4880 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 01 02:56:24 crc kubenswrapper[4880]: I1201 02:56:24.211007 4880 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 01 02:56:24 crc kubenswrapper[4880]: I1201 02:56:24.211072 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 01 02:56:24 crc kubenswrapper[4880]: I1201 02:56:24.224988 4880 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 01 02:56:24 crc kubenswrapper[4880]: I1201 02:56:24.225071 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.160596 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.161767 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.161797 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.161807 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.161847 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 02:56:27 crc kubenswrapper[4880]: E1201 02:56:27.165212 4880 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.811177 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.811422 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.812801 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.812851 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.812931 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.820864 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.881210 4880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.881626 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.882630 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.882682 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:56:27 crc kubenswrapper[4880]: I1201 02:56:27.882699 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.030304 4880 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.388484 4880 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.697369 4880 apiserver.go:52] "Watching apiserver"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.700544 4880 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.700886 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.701239 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.701294 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.701350 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.701417 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 01 02:56:28 crc kubenswrapper[4880]: E1201 02:56:28.701543 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.701565 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.701600 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 01 02:56:28 crc kubenswrapper[4880]: E1201 02:56:28.701638 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 02:56:28 crc kubenswrapper[4880]: E1201 02:56:28.701935 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.703335 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.703928 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.704476 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.706010 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.706098 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.706105 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.706038 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.706086 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.706724 4880 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.707682 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 01 02:56:28 crc kubenswrapper[4880]:
I1201 02:56:28.731339 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.745213 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.755843 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.765789 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.774702 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.783947 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.797377 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.806570 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.815658 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.825271 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:28 crc kubenswrapper[4880]: I1201 02:56:28.835765 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.222136 4880 trace.go:236] Trace[1383399577]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 02:56:14.480) (total time: 14741ms): Dec 01 02:56:29 crc kubenswrapper[4880]: Trace[1383399577]: ---"Objects listed" error: 14741ms (02:56:29.221) Dec 01 02:56:29 crc kubenswrapper[4880]: Trace[1383399577]: [14.741320827s] [14.741320827s] END Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.222190 4880 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.224128 4880 trace.go:236] Trace[907445827]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 02:56:15.296) (total time: 13927ms): Dec 01 02:56:29 crc kubenswrapper[4880]: Trace[907445827]: ---"Objects listed" error: 13926ms (02:56:29.223) Dec 01 02:56:29 crc kubenswrapper[4880]: Trace[907445827]: [13.927096407s] [13.927096407s] END Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.224176 4880 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.225965 4880 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.253921 4880 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51670->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.253968 4880 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51686->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.253983 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51670->192.168.126.11:17697: read: connection reset by peer" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.254058 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51686->192.168.126.11:17697: read: connection reset by peer" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.254768 4880 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness 
probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.255033 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.295674 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.299697 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.300216 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326769 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326808 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326830 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326846 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326862 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326894 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326913 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326938 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326959 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326978 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.326993 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327009 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327052 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327065 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327081 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327155 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327173 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327189 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327206 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327237 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327231 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327369 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327252 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327455 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327483 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327505 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327526 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327549 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327570 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327592 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327614 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327636 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327657 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 02:56:29 crc 
kubenswrapper[4880]: I1201 02:56:29.327686 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327707 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327727 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327747 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327768 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327793 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327814 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327836 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327859 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327903 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327941 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327962 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327983 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328003 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328025 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328051 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328087 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328113 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328134 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328156 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328179 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328199 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.328220 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328240 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328263 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328284 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328334 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328355 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328376 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328396 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328418 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328446 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328474 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.328498 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328522 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328543 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328588 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328610 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328632 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328655 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328676 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328698 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328733 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328754 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.328804 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328828 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328850 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328893 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328923 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328947 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327401 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327460 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327493 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327628 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327650 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327888 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328067 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.327242 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328496 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328706 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328726 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328793 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329278 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329298 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.328836 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.328968 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:56:29.828948457 +0000 UTC m=+19.340202839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329355 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329394 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329434 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329460 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329466 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329479 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329511 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329538 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329562 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 02:56:29 crc 
kubenswrapper[4880]: I1201 02:56:29.329586 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329610 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329632 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329651 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329658 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329683 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329707 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329734 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329756 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329780 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329793 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329834 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329846 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329863 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329921 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329930 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329929 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329959 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329987 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330015 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330024 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330077 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330099 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330119 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330157 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330162 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330225 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330251 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330253 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330267 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330299 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330321 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330338 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330354 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330372 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330385 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330391 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330421 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330425 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329181 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329204 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330769 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.331011 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.331301 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.331788 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.331955 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.332153 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.332456 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.332712 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.332816 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.332854 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.332859 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.329029 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333232 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333246 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333273 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333549 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333661 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333713 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333885 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.333824 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334130 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334252 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334262 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334362 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334453 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334542 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334649 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334663 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334856 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.334995 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.335245 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.335786 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.335972 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.336684 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.336845 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337074 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337265 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337519 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337627 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.330431 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337689 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337718 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337740 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337770 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337794 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337814 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337839 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337860 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337896 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337892 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: 
"25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337918 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337942 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337937 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337968 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.337986 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338009 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338029 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338048 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338067 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338242 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338236 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338479 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338541 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.338816 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339060 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339151 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339205 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339512 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339453 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339622 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.340963 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.341359 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.341814 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.342264 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.342612 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.342853 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.343084 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.343567 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.344134 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.344547 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.344800 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.349499 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.349531 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.349746 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.349951 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.349998 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.350485 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.350760 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.350892 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351088 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.339077 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351155 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351180 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351242 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351265 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351285 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351892 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.352136 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.351455 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.352269 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.352298 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.352332 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.352361 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353148 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353242 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353275 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353311 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353343 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353390 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353422 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353450 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353483 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353592 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353708 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353940 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353980 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.353510 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354074 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354115 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354341 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354358 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354525 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354119 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354670 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354716 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354752 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.354753 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.355608 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.357179 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.357485 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.357704 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.357778 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.358192 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.358713 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359041 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359086 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359121 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359141 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359159 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359176 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359194 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359210 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359226 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359243 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359260 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359279 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359297 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359313 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359329 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359344 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359363 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359379 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359394 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359437 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359453 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359470 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359486 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359502 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359519 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359535 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359552 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359569 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359587 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359602 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359674 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359690 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359707 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359723 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " 
Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359739 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359758 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359773 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359790 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359807 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359828 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359846 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359862 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359916 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359941 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359964 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359981 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.359998 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360014 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360034 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360021 4880 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360063 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360082 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360101 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360122 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360138 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360155 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360173 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360255 4880 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360267 4880 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360278 4880 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360287 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360298 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360307 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360316 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360325 4880 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360334 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360343 4880 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360353 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360362 4880 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360370 4880 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360378 4880 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360388 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360396 4880 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360406 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 
crc kubenswrapper[4880]: I1201 02:56:29.360414 4880 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360424 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360433 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360444 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360454 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360462 4880 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360473 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 
crc kubenswrapper[4880]: I1201 02:56:29.360482 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360493 4880 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360502 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360510 4880 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360519 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360527 4880 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360537 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360545 4880 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360554 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360562 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360570 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360580 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360590 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360600 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360609 4880 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360617 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360630 4880 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360640 4880 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360649 4880 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360658 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360666 4880 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360676 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360685 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360693 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360703 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360711 4880 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360721 4880 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360730 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360738 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360747 4880 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360756 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360764 4880 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360773 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360781 4880 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360790 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360798 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.360807 4880 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360815 4880 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360823 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360832 4880 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360841 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360849 4880 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360857 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360877 4880 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360886 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360895 4880 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360904 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360912 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360921 4880 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360930 4880 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360939 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 
02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360947 4880 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360956 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360964 4880 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360973 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360981 4880 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360992 4880 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361002 4880 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361010 4880 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361019 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361028 4880 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361036 4880 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361045 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361053 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361062 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361070 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361078 4880 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361087 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361113 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361122 4880 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361132 4880 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361141 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361152 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" 
Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361162 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361170 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361179 4880 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361188 4880 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361197 4880 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361206 4880 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361215 4880 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.361223 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361232 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361241 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361252 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361260 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361268 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361277 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361285 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361297 4880 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361306 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361314 4880 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361324 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361333 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361341 4880 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361350 4880 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node 
\"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361371 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361381 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361389 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361398 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361407 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361415 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361424 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361432 4880 reconciler_common.go:293] "Volume 
detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361441 4880 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361451 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360581 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360667 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.360820 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361298 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361468 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.361806 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.362501 4880 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.363275 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.363773 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.364100 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.364368 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.364585 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.365023 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.365208 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.365404 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.365512 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.365741 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.371133 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.372248 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.372500 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.373142 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.373446 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.373958 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.374184 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.374478 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.374734 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.375047 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.375203 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:29.8751804 +0000 UTC m=+19.386434852 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.375326 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.375410 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.375727 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.375851 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:29.875838846 +0000 UTC m=+19.387093298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.376168 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.375780 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.376552 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.376753 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.376782 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377091 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377243 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377431 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377599 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377614 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377679 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377905 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.377982 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.378010 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.378155 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.378303 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.378509 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.378565 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.378876 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.379008 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.379051 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.379501 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.380403 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.382080 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.382096 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.382271 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.382359 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.382439 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.382599 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.388183 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.388416 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.388561 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.388703 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.389504 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.390249 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.390323 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.390567 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.397913 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.402941 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.402954 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403026 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403041 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403099 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:29.903081988 +0000 UTC m=+19.414336350 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403238 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403259 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403271 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.403318 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:29.903299904 +0000 UTC m=+19.414554276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.403979 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.408047 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.410327 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.413215 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.433755 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.437080 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.451082 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463082 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463119 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463175 4880 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463187 4880 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463196 4880 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463205 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463214 4880 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463223 4880 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463236 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463248 4880 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463256 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463265 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463274 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463284 4880 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463300 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463309 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463319 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463327 4880 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463335 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463343 4880 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463351 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463360 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463371 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463383 4880 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463397 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463408 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463421 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463435 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463444 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463455 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463464 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463473 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463481 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463489 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463497 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463506 4880 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463515 4880 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463524 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463533 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463543 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463552 4880 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463560 4880 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463569 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463577 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463586 4880 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463594 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463603 4880 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463612 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463620 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" 
DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463629 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463638 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463646 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463655 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463664 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463672 4880 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463680 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.463689 4880 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463698 4880 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463707 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463716 4880 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463725 4880 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463736 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463745 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463753 4880 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463791 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.463954 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.464022 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.477927 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.480587 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.499325 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.508909 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.525559 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.536350 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.545413 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-g45lh"] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.545853 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.546307 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9899k"] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.546511 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.547885 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.547953 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548087 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548106 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548284 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548373 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548380 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548481 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.548547 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.561096 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.564308 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.564329 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.564338 4880 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.573407 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.597233 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.612685 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.622046 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.623128 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.631931 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 02:56:29 crc kubenswrapper[4880]: W1201 02:56:29.636057 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3adec141db1ec5c9b574ff2808615d6793aae80170785938050f6974fb59d66d WatchSource:0}: Error finding container 3adec141db1ec5c9b574ff2808615d6793aae80170785938050f6974fb59d66d: Status 404 returned error can't find the container with id 3adec141db1ec5c9b574ff2808615d6793aae80170785938050f6974fb59d66d Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.638713 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.639823 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: W1201 02:56:29.641919 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-c2d9abdaf629b74318cea96972912ad7288f423e4679871bf1abd1896f0c2736 WatchSource:0}: Error finding container c2d9abdaf629b74318cea96972912ad7288f423e4679871bf1abd1896f0c2736: Status 404 returned error can't find the container with id 
c2d9abdaf629b74318cea96972912ad7288f423e4679871bf1abd1896f0c2736 Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.652292 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.659119 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: W1201 02:56:29.661250 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1b6fd123e52f0dd8ce86cfafbc7323b255ffe8075b8e93523a09c01e29e9254f WatchSource:0}: Error finding container 1b6fd123e52f0dd8ce86cfafbc7323b255ffe8075b8e93523a09c01e29e9254f: Status 404 returned error can't find the container with id 1b6fd123e52f0dd8ce86cfafbc7323b255ffe8075b8e93523a09c01e29e9254f Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.665659 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/057ec9cf-8406-4617-bda6-99517f6d2a41-mcd-auth-proxy-config\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.665691 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f54b\" (UniqueName: \"kubernetes.io/projected/4370efb6-7bd1-4363-9c25-4db445e54a28-kube-api-access-9f54b\") pod \"node-resolver-9899k\" (UID: \"4370efb6-7bd1-4363-9c25-4db445e54a28\") " pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.665724 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9q9z\" (UniqueName: \"kubernetes.io/projected/057ec9cf-8406-4617-bda6-99517f6d2a41-kube-api-access-m9q9z\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.665750 
4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4370efb6-7bd1-4363-9c25-4db445e54a28-hosts-file\") pod \"node-resolver-9899k\" (UID: \"4370efb6-7bd1-4363-9c25-4db445e54a28\") " pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.665785 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/057ec9cf-8406-4617-bda6-99517f6d2a41-rootfs\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.665799 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/057ec9cf-8406-4617-bda6-99517f6d2a41-proxy-tls\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.670428 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.678164 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.692843 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.700406 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.708332 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.715627 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767062 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9q9z\" (UniqueName: \"kubernetes.io/projected/057ec9cf-8406-4617-bda6-99517f6d2a41-kube-api-access-m9q9z\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 
02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767143 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/057ec9cf-8406-4617-bda6-99517f6d2a41-rootfs\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767201 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/057ec9cf-8406-4617-bda6-99517f6d2a41-rootfs\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767160 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4370efb6-7bd1-4363-9c25-4db445e54a28-hosts-file\") pod \"node-resolver-9899k\" (UID: \"4370efb6-7bd1-4363-9c25-4db445e54a28\") " pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767262 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/057ec9cf-8406-4617-bda6-99517f6d2a41-proxy-tls\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767278 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/057ec9cf-8406-4617-bda6-99517f6d2a41-mcd-auth-proxy-config\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc 
kubenswrapper[4880]: I1201 02:56:29.767293 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f54b\" (UniqueName: \"kubernetes.io/projected/4370efb6-7bd1-4363-9c25-4db445e54a28-kube-api-access-9f54b\") pod \"node-resolver-9899k\" (UID: \"4370efb6-7bd1-4363-9c25-4db445e54a28\") " pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767342 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4370efb6-7bd1-4363-9c25-4db445e54a28-hosts-file\") pod \"node-resolver-9899k\" (UID: \"4370efb6-7bd1-4363-9c25-4db445e54a28\") " pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.767893 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/057ec9cf-8406-4617-bda6-99517f6d2a41-mcd-auth-proxy-config\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.771803 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/057ec9cf-8406-4617-bda6-99517f6d2a41-proxy-tls\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.782394 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9q9z\" (UniqueName: \"kubernetes.io/projected/057ec9cf-8406-4617-bda6-99517f6d2a41-kube-api-access-m9q9z\") pod \"machine-config-daemon-g45lh\" (UID: \"057ec9cf-8406-4617-bda6-99517f6d2a41\") " pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.782934 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f54b\" (UniqueName: \"kubernetes.io/projected/4370efb6-7bd1-4363-9c25-4db445e54a28-kube-api-access-9f54b\") pod \"node-resolver-9899k\" (UID: \"4370efb6-7bd1-4363-9c25-4db445e54a28\") " pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.783166 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.783418 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.783283 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.783511 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.783428 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.783571 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.859017 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.865081 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9899k" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.868459 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.868612 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:56:30.86859803 +0000 UTC m=+20.379852402 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:56:29 crc kubenswrapper[4880]: W1201 02:56:29.877587 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4370efb6_7bd1_4363_9c25_4db445e54a28.slice/crio-5bdf13c00712383605d1d432e59d1def11d309abd36360dc9038853fcdd71c28 WatchSource:0}: Error finding container 5bdf13c00712383605d1d432e59d1def11d309abd36360dc9038853fcdd71c28: Status 404 returned error can't find the container with id 5bdf13c00712383605d1d432e59d1def11d309abd36360dc9038853fcdd71c28 Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.895960 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.896010 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3adec141db1ec5c9b574ff2808615d6793aae80170785938050f6974fb59d66d"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.899135 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-w7bw7"] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.899686 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5znrt"] Dec 01 
02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.900226 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.900271 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.900863 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-52bx6"] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.901564 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.902757 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.904795 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.904899 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.904967 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.905439 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908185 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908197 4880 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908188 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908375 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908635 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908706 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908787 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.908836 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.909042 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.910787 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.911182 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4" exitCode=255 Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.911277 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.923549 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"e682603a22d5df46fa5a227dd4114324c3917cb1d0db7833c04b5701e6bbe9b8"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.926211 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c2d9abdaf629b74318cea96972912ad7288f423e4679871bf1abd1896f0c2736"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.927757 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.936286 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9899k" event={"ID":"4370efb6-7bd1-4363-9c25-4db445e54a28","Type":"ContainerStarted","Data":"5bdf13c00712383605d1d432e59d1def11d309abd36360dc9038853fcdd71c28"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.944647 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.945057 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.945410 4880 scope.go:117] "RemoveContainer" containerID="bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.950904 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750"} Dec 01 
02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.950933 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1b6fd123e52f0dd8ce86cfafbc7323b255ffe8075b8e93523a09c01e29e9254f"} Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.955748 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f2
05c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.967636 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969375 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-log-socket\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969413 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-config\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969430 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-script-lib\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969453 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-cnibin\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969480 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969500 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-systemd\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969517 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-cni-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969536 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-kubelet\") pod \"multus-5znrt\" (UID: 
\"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.969658 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.969701 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.969715 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969725 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-kubelet\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969753 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-netns\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969778 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-node-log\") pod 
\"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969793 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2a76648-c405-40a9-a0d4-3604ff888d39-cni-binary-copy\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969815 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-cni-bin\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969836 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvnjs\" (UniqueName: \"kubernetes.io/projected/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-kube-api-access-wvnjs\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969857 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-daemon-config\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969907 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-cni-multus\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969924 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfpj8\" (UniqueName: \"kubernetes.io/projected/6366d207-93fa-4b9f-ae70-0bab0b293db3-kube-api-access-mfpj8\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969940 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2a76648-c405-40a9-a0d4-3604ff888d39-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969956 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-netns\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969980 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-var-lib-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.969995 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-multus-certs\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970012 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970029 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-bin\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970048 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-netd\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970065 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-system-cni-dir\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970082 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970099 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-ovn-kubernetes\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970114 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-etc-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970130 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6366d207-93fa-4b9f-ae70-0bab0b293db3-cni-binary-copy\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970166 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-conf-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.970216 
4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:30.970183618 +0000 UTC m=+20.481437990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970555 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-systemd-units\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970588 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4pc\" (UniqueName: \"kubernetes.io/projected/f2a76648-c405-40a9-a0d4-3604ff888d39-kube-api-access-9p4pc\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970608 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-hostroot\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 
02:56:29.970630 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970646 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovn-node-metrics-cert\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.970737 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970765 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-system-cni-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970790 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-etc-kubernetes\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970816 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970833 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970850 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-cnibin\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970918 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-env-overrides\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970936 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-os-release\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970953 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-slash\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970969 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-socket-dir-parent\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.970984 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-k8s-cni-cncf-io\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.971000 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-os-release\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.971015 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-ovn\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.971052 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971102 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971105 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:30.97109024 +0000 UTC m=+20.482344612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971144 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:30.971131221 +0000 UTC m=+20.482385593 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971199 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971209 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971219 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: E1201 02:56:29.971240 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:30.971234194 +0000 UTC m=+20.482488566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:29 crc kubenswrapper[4880]: I1201 02:56:29.986469 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.001622 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.013230 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.024255 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.034232 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.048032 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.060490 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.071836 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.071982 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-cni-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072010 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-kubelet\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072028 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2a76648-c405-40a9-a0d4-3604ff888d39-cni-binary-copy\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072045 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-kubelet\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072061 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-netns\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072081 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-node-log\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072099 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvnjs\" (UniqueName: 
\"kubernetes.io/projected/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-kube-api-access-wvnjs\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072114 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-cni-bin\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072133 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-daemon-config\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072160 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2a76648-c405-40a9-a0d4-3604ff888d39-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072174 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-cni-multus\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072193 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfpj8\" (UniqueName: 
\"kubernetes.io/projected/6366d207-93fa-4b9f-ae70-0bab0b293db3-kube-api-access-mfpj8\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072208 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-var-lib-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072227 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-netns\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072241 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072256 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-multus-certs\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072270 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072285 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-ovn-kubernetes\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072301 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-bin\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072319 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-netd\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072333 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-system-cni-dir\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072369 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-systemd-units\") pod 
\"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072385 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-etc-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072401 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6366d207-93fa-4b9f-ae70-0bab0b293db3-cni-binary-copy\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072417 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-conf-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072441 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovn-node-metrics-cert\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072456 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4pc\" (UniqueName: \"kubernetes.io/projected/f2a76648-c405-40a9-a0d4-3604ff888d39-kube-api-access-9p4pc\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") 
" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072472 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-hostroot\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072500 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-system-cni-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072516 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-etc-kubernetes\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072530 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-env-overrides\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072543 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-os-release\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072559 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-cnibin\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072573 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-slash\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072587 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-socket-dir-parent\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072601 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-k8s-cni-cncf-io\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072616 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-ovn\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072633 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072647 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-os-release\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072664 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-cnibin\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072687 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-systemd\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072706 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-netd\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072712 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-log-socket\") pod 
\"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072748 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-log-socket\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072752 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-config\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072770 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-script-lib\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072787 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-system-cni-dir\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072835 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-etc-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072843 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-systemd-units\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.072973 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-cni-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073003 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-kubelet\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073508 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6366d207-93fa-4b9f-ae70-0bab0b293db3-cni-binary-copy\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073640 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2a76648-c405-40a9-a0d4-3604ff888d39-cni-binary-copy\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073682 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-node-log\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073696 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-netns\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073724 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073748 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-multus-certs\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073930 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-cni-bin\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.073975 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.074009 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-ovn-kubernetes\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.074034 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-bin\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.074374 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-daemon-config\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.074567 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-config\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075029 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-script-lib\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075383 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-conf-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075466 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f2a76648-c405-40a9-a0d4-3604ff888d39-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075504 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-var-lib-cni-multus\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075772 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-multus-socket-dir-parent\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075911 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-hostroot\") pod \"multus-5znrt\" (UID: 
\"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075946 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-system-cni-dir\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.075960 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-etc-kubernetes\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076132 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-cnibin\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076174 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-os-release\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076196 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-slash\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076216 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2a76648-c405-40a9-a0d4-3604ff888d39-cnibin\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076233 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-host-run-k8s-cni-cncf-io\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076247 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-ovn\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076262 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076271 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-env-overrides\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076287 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/6366d207-93fa-4b9f-ae70-0bab0b293db3-os-release\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076301 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-var-lib-openvswitch\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076308 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-systemd\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076324 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-kubelet\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.076331 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-netns\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.085464 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.087448 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovn-node-metrics-cert\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.098704 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvnjs\" (UniqueName: \"kubernetes.io/projected/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-kube-api-access-wvnjs\") pod \"ovnkube-node-52bx6\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.100680 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfpj8\" (UniqueName: \"kubernetes.io/projected/6366d207-93fa-4b9f-ae70-0bab0b293db3-kube-api-access-mfpj8\") pod \"multus-5znrt\" (UID: \"6366d207-93fa-4b9f-ae70-0bab0b293db3\") " pod="openshift-multus/multus-5znrt" Dec 01 
02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.101323 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4pc\" (UniqueName: \"kubernetes.io/projected/f2a76648-c405-40a9-a0d4-3604ff888d39-kube-api-access-9p4pc\") pod \"multus-additional-cni-plugins-w7bw7\" (UID: \"f2a76648-c405-40a9-a0d4-3604ff888d39\") " pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.109742 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.129976 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.144558 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.158367 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"
/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.170057 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.184129 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.198397 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.205308 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.215362 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.229068 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5znrt" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.239043 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.257727 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:30 crc kubenswrapper[4880]: W1201 02:56:30.305988 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a76648_c405_40a9_a0d4_3604ff888d39.slice/crio-9fe28a3d0133ecae185ce918b2a2bc5f71e10d80036ba34bc4e98bad17fff6ec WatchSource:0}: Error finding container 9fe28a3d0133ecae185ce918b2a2bc5f71e10d80036ba34bc4e98bad17fff6ec: Status 404 returned error can't find the container with id 9fe28a3d0133ecae185ce918b2a2bc5f71e10d80036ba34bc4e98bad17fff6ec Dec 01 02:56:30 crc kubenswrapper[4880]: W1201 02:56:30.306411 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e4d730b_5ca7_46cf_a62a_3c4a54bc1697.slice/crio-f92ce00341c3592a1e69a9c9ea80984ccaf07e2622c309e76def046947e2a523 WatchSource:0}: Error finding container f92ce00341c3592a1e69a9c9ea80984ccaf07e2622c309e76def046947e2a523: Status 404 returned error can't find the container with id f92ce00341c3592a1e69a9c9ea80984ccaf07e2622c309e76def046947e2a523 Dec 01 02:56:30 crc kubenswrapper[4880]: W1201 02:56:30.308313 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6366d207_93fa_4b9f_ae70_0bab0b293db3.slice/crio-c8186d954ed151f6aed1a8f1a8e485ce14939b1765d391274f15f9610120e5aa WatchSource:0}: Error finding container c8186d954ed151f6aed1a8f1a8e485ce14939b1765d391274f15f9610120e5aa: Status 404 returned error can't find the container with id c8186d954ed151f6aed1a8f1a8e485ce14939b1765d391274f15f9610120e5aa Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.787765 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 
02:56:30.788905 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.789618 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.790292 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.790892 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.791434 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.792054 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.792641 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.793345 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 
02:56:30.793884 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.794388 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.795047 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.795559 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.796214 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.796792 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.799707 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.800467 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 
02:56:30.801359 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.801936 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.809786 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.810453 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.810546 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.811128 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.812171 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.812881 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.813770 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.814395 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.815394 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.815886 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.816881 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.817524 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.818036 4880 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.818138 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.821018 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.821550 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.822554 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.824283 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.825011 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.826324 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.827026 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.827672 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.828185 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.828808 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.829692 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.831915 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.832441 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.833849 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.834576 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.836728 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.837306 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.842357 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.843621 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.845508 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.846141 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.846616 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.880207 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.880351 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:56:32.880327604 +0000 UTC m=+22.391581976 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.884816 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.921080 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.941552 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.954732 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9899k" event={"ID":"4370efb6-7bd1-4363-9c25-4db445e54a28","Type":"ContainerStarted","Data":"aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.955972 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.956971 4880 generic.go:334] "Generic (PLEG): container finished" podID="f2a76648-c405-40a9-a0d4-3604ff888d39" containerID="90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232" 
exitCode=0 Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.957020 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerDied","Data":"90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.957035 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerStarted","Data":"9fe28a3d0133ecae185ce918b2a2bc5f71e10d80036ba34bc4e98bad17fff6ec"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.959236 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerStarted","Data":"1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.959261 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerStarted","Data":"c8186d954ed151f6aed1a8f1a8e485ce14939b1765d391274f15f9610120e5aa"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.961115 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.962545 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.963516 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.963690 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.963712 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.965093 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" exitCode=0 Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.965131 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.965153 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"f92ce00341c3592a1e69a9c9ea80984ccaf07e2622c309e76def046947e2a523"} Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.983806 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.983857 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.983935 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.983974 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:30 crc kubenswrapper[4880]: I1201 02:56:30.983998 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984054 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984102 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:32.984087635 +0000 UTC m=+22.495341997 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984114 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984129 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984140 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984182 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:32.984169297 +0000 UTC m=+22.495423669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984223 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984230 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984238 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984257 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:32.984249839 +0000 UTC m=+22.495504211 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984282 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:30 crc kubenswrapper[4880]: E1201 02:56:30.984300 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:32.98429507 +0000 UTC m=+22.495549442 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.020599 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.108187 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.133197 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.160083 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.189383 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.203982 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.211903 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.230011 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.261909 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.282486 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.304739 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.316422 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.329780 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.350674 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.369389 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 
02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.397829 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.420243 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.458675 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.489780 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.509289 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.540771 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.783744 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.783770 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:31 crc kubenswrapper[4880]: E1201 02:56:31.783879 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.784181 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:31 crc kubenswrapper[4880]: E1201 02:56:31.784269 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:31 crc kubenswrapper[4880]: E1201 02:56:31.784314 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.969912 4880 generic.go:334] "Generic (PLEG): container finished" podID="f2a76648-c405-40a9-a0d4-3604ff888d39" containerID="4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968" exitCode=0 Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.969976 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerDied","Data":"4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968"} Dec 01 02:56:31 crc kubenswrapper[4880]: I1201 02:56:31.973739 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70"} Dec 01 02:56:31 crc 
kubenswrapper[4880]: I1201 02:56:31.984896 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.005477 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.029557 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.046697 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.073546 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.096990 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.108915 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.120229 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.131351 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 
02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.141505 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.150449 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.162743 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.174654 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.188101 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.204601 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.209080 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 01 02:56:32 crc kubenswrapper[4880]: 
I1201 02:56:32.217808 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.222360 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.222750 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.229285 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.243952 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.255818 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.265634 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.276834 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.292737 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.306783 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.314566 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.327852 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.338288 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.352814 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.366313 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.380342 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.412683 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.429068 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.444983 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.460650 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.474900 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.487352 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.509735 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.528606 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.538933 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.555148 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.568381 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.911549 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:32 crc kubenswrapper[4880]: E1201 02:56:32.911737 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:56:36.911707953 +0000 UTC m=+26.422962365 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.985696 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.985817 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} Dec 
01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.988969 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.989024 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.989042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.989060 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.993220 4880 generic.go:334] "Generic (PLEG): container finished" podID="f2a76648-c405-40a9-a0d4-3604ff888d39" containerID="9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606" exitCode=0 Dec 01 02:56:32 crc kubenswrapper[4880]: I1201 02:56:32.993348 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerDied","Data":"9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606"} Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.012573 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.012808 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.013134 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.013402 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013423 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013485 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-01 02:56:37.013460935 +0000 UTC m=+26.524715347 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013522 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:37.013508906 +0000 UTC m=+26.524763318 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013729 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013755 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013775 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.013822 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:37.013808204 +0000 UTC m=+26.525062616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.014492 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.015218 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.015410 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.015440 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.015510 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:37.015485524 +0000 UTC m=+26.526739926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.020739 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.052363 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.068327 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.086628 4880 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.111660 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.148140 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.185347 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.209959 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.225169 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.237782 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.246571 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.257910 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c
5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.268055 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.285782 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.566233 4880 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.569287 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.569344 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.569361 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.569451 4880 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.579973 4880 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.580355 4880 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.581839 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.581949 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.581975 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.582009 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.582031 4880 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.604286 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.609347 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.609429 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.609454 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.609485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.609507 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.629316 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.633835 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.633935 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.633957 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.633991 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.634011 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.653611 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.659349 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.659411 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.659430 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.659459 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.659482 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.688215 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.693110 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.693178 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.693198 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.693223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.693241 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.709657 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.709807 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.711666 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.711712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.711725 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.711742 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.711754 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.783917 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.784005 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.784036 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.784118 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.784279 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:33 crc kubenswrapper[4880]: E1201 02:56:33.784368 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.813856 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.813915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.813926 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.813945 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.813957 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.916413 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.916474 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.916493 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.916516 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:33 crc kubenswrapper[4880]: I1201 02:56:33.916533 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:33Z","lastTransitionTime":"2025-12-01T02:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.000360 4880 generic.go:334] "Generic (PLEG): container finished" podID="f2a76648-c405-40a9-a0d4-3604ff888d39" containerID="b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86" exitCode=0 Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.000409 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerDied","Data":"b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.019530 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.019595 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.019613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.019724 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.019746 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.022188 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.047547 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.066784 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.097457 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.113189 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.122070 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.122103 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.122115 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.122135 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.122146 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.143267 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.176735 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.192202 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 
02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.205487 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.217818 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.227486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.227537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.227551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc 
kubenswrapper[4880]: I1201 02:56:34.227569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.227587 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.233340 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.249122 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.261285 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.281669 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.330211 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.330245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.330255 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.330270 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.330281 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.433627 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.433712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.433731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.433756 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.433774 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.537121 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.537162 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.537170 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.537183 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.537192 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.645304 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.645366 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.645386 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.645414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.645431 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.747824 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.747902 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.747923 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.747947 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.747964 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.851018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.851072 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.851088 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.851108 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.851124 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.957921 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.957993 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.958014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.958042 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:34 crc kubenswrapper[4880]: I1201 02:56:34.958111 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:34Z","lastTransitionTime":"2025-12-01T02:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.008152 4880 generic.go:334] "Generic (PLEG): container finished" podID="f2a76648-c405-40a9-a0d4-3604ff888d39" containerID="02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e" exitCode=0 Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.008237 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerDied","Data":"02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.016123 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.035970 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.059010 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.060833 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.060927 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.060954 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.060984 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.061007 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.081930 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.102690 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.139672 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.163217 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.164235 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.164283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.164304 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.164328 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.164384 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.183654 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.202957 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.225492 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.242506 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.266618 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc 
kubenswrapper[4880]: I1201 02:56:35.266640 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.266648 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.266660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.266668 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.269044 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.288688 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.303916 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.327537 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.369274 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.369317 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.369346 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.369363 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.369375 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.472312 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.472353 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.472366 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.472383 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.472395 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.578224 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.578440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.578558 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.579122 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.579326 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.685620 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.686018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.686158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.686348 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.686617 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.783654 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.783703 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.783745 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:35 crc kubenswrapper[4880]: E1201 02:56:35.783828 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:35 crc kubenswrapper[4880]: E1201 02:56:35.783922 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:35 crc kubenswrapper[4880]: E1201 02:56:35.784054 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.790469 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.790512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.790524 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.790539 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.790548 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.876503 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-sqgcx"] Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.877595 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.880008 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.880410 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.880855 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.881527 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.895036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.895071 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.895080 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.895095 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.895105 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.901663 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.915976 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.932771 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.947126 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.962060 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bbed5d89-6221-4f4b-af2d-55e677d62027-serviceca\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.962119 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbed5d89-6221-4f4b-af2d-55e677d62027-host\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.962240 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ggpx\" (UniqueName: \"kubernetes.io/projected/bbed5d89-6221-4f4b-af2d-55e677d62027-kube-api-access-8ggpx\") pod 
\"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.968089 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.987260 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.997618 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.997655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.997667 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.997688 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:35 crc kubenswrapper[4880]: I1201 02:56:35.997700 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:35Z","lastTransitionTime":"2025-12-01T02:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.006817 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.024520 4880 generic.go:334] "Generic (PLEG): container finished" podID="f2a76648-c405-40a9-a0d4-3604ff888d39" containerID="309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15" exitCode=0 Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.024572 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerDied","Data":"309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.030929 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.049395 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.063286 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bbed5d89-6221-4f4b-af2d-55e677d62027-serviceca\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.063341 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bbed5d89-6221-4f4b-af2d-55e677d62027-host\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.063410 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ggpx\" (UniqueName: \"kubernetes.io/projected/bbed5d89-6221-4f4b-af2d-55e677d62027-kube-api-access-8ggpx\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.063568 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bbed5d89-6221-4f4b-af2d-55e677d62027-host\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.065054 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bbed5d89-6221-4f4b-af2d-55e677d62027-serviceca\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.071140 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.101523 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc 
kubenswrapper[4880]: I1201 02:56:36.101554 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.101565 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.101582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.101595 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.103106 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ggpx\" (UniqueName: \"kubernetes.io/projected/bbed5d89-6221-4f4b-af2d-55e677d62027-kube-api-access-8ggpx\") pod \"node-ca-sqgcx\" (UID: \"bbed5d89-6221-4f4b-af2d-55e677d62027\") " pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.111669 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.133916 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.153303 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.166123 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.192572 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.203016 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sqgcx" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.204919 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.204964 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.204979 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.204997 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.205010 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.219445 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.232084 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.250268 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.263649 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.279211 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.293364 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.308172 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.308213 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.308226 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.308245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.308259 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.314309 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.338319 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.353809 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 
02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.367417 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.381861 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.398311 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.410840 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc 
kubenswrapper[4880]: I1201 02:56:36.410857 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.410865 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.410891 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.410900 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.413733 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.425775 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.442812 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:36Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.513941 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.514004 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.514023 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.514049 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.514068 4880 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.616763 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.616822 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.616834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.616852 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.616863 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.719911 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.719953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.719964 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.719981 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.719994 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.822620 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.822682 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.822702 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.822728 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.822752 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.925685 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.925983 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.925994 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.926010 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.926023 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:36Z","lastTransitionTime":"2025-12-01T02:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:36 crc kubenswrapper[4880]: I1201 02:56:36.974692 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:36 crc kubenswrapper[4880]: E1201 02:56:36.974947 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:56:44.974906223 +0000 UTC m=+34.486160635 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.027977 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.028023 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.028063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.028095 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.028118 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.035424 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.038706 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sqgcx" event={"ID":"bbed5d89-6221-4f4b-af2d-55e677d62027","Type":"ContainerStarted","Data":"48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.038758 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sqgcx" event={"ID":"bbed5d89-6221-4f4b-af2d-55e677d62027","Type":"ContainerStarted","Data":"c7ace295cab57d17545a52fe8f3328a2c2c5dfee5666c42b9ceaf6d809cfa990"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.047092 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" event={"ID":"f2a76648-c405-40a9-a0d4-3604ff888d39","Type":"ContainerStarted","Data":"ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.063534 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.075551 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.075634 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.075676 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.075710 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075831 4880 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075902 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075914 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075951 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075968 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075979 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075994 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075999 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.075914 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:45.075898217 +0000 UTC m=+34.587152589 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.076089 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:45.076061071 +0000 UTC m=+34.587315493 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.076136 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:45.076117822 +0000 UTC m=+34.587372324 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.076166 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:56:45.076149973 +0000 UTC m=+34.587404505 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.079679 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.102322 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.126474 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.130293 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.130333 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.130348 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.130370 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.130383 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.145898 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.167865 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.191723 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.220956 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.233514 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.233558 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.233569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.233587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.233598 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.236200 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.251231 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.264373 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.282452 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.294820 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.308469 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.327644 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.335460 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.335681 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.335834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.335998 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.336120 4880 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.344673 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.360616 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.374452 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.392788 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.413559 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.439822 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.439860 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.439885 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.439904 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.439914 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.443398 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.460045 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.481621 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.501985 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.532211 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.542275 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc 
kubenswrapper[4880]: I1201 02:56:37.542551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.542641 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.542726 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.542805 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.551104 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.567455 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.583924 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.602976 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.617111 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:37Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.644802 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.644839 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.644851 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.644918 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.644931 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.747041 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.747098 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.747115 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.747137 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.747154 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.783481 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.783490 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.783751 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.783624 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.783980 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:37 crc kubenswrapper[4880]: E1201 02:56:37.784078 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.850026 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.850267 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.850328 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.850397 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.850465 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.953666 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.953716 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.953732 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.953754 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:37 crc kubenswrapper[4880]: I1201 02:56:37.953771 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:37Z","lastTransitionTime":"2025-12-01T02:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.051615 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.057587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.057657 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.057680 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.057707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.057736 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.098306 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.098636 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.123527 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.147798 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.160916 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.160976 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.160993 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.161018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.161035 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.167488 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.189513 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.205051 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.220687 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.238303 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.257300 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.263376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.263436 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.263454 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.263480 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.263498 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.271999 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.289373 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.306523 4880 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.323350 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.338282 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.366586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.366645 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.366663 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.366688 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.366711 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.371319 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.394108 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68
f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf93
3b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01
T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.412400 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.426653 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.442900 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.456545 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6
910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.469863 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.469950 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.469972 4880 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.469998 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.470016 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.479964 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.499565 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.518447 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.535942 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.555516 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.573184 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.573228 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.573241 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.573259 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.573272 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.575602 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.607591 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.631553 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.651573 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.675927 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.676005 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.676028 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.676065 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.676090 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.678443 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:38Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.781902 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.781971 4880 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.781985 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.782006 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.782018 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.884544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.884587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.884598 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.884613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.884624 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.987299 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.987358 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.987376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.987399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:38 crc kubenswrapper[4880]: I1201 02:56:38.987415 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:38Z","lastTransitionTime":"2025-12-01T02:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.054613 4880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.055224 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.077663 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.089012 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.089041 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.089049 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.089062 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.089072 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.099537 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.114113 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.125944 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.135101 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.147689 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.160708 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.172596 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.186843 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.191455 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.191499 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.191516 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.191537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.191554 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.203794 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.215932 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e
96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.231349 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.245207 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.258593 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.276119 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.293714 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.293761 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.293773 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.293793 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.293804 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.294262 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:39Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.396284 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.396335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.396346 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.396368 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.396380 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.499814 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.499853 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.499861 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.499886 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.499896 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.602563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.602628 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.602651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.602680 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.602699 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.706036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.706092 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.706116 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.706144 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.706162 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.783310 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.783373 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.783325 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:39 crc kubenswrapper[4880]: E1201 02:56:39.783526 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:39 crc kubenswrapper[4880]: E1201 02:56:39.783649 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:39 crc kubenswrapper[4880]: E1201 02:56:39.783907 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.808541 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.808642 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.808661 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.808687 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.808707 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.912251 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.912336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.912399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.912451 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:39 crc kubenswrapper[4880]: I1201 02:56:39.912482 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:39Z","lastTransitionTime":"2025-12-01T02:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.016197 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.016565 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.016773 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.017044 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.017250 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.062659 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/0.log" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.068034 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a" exitCode=1 Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.068098 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.069223 4880 scope.go:117] "RemoveContainer" containerID="2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.096549 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.119451 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.120441 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.120562 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.120645 4880 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.120724 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.120793 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.140767 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.162066 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.194159 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:39Z\\\",\\\"message\\\":\\\"208] Removed *v1.Pod event handler 6\\\\nI1201 02:56:39.369493 6087 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 02:56:39.369623 6087 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.369788 6087 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.369805 6087 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:39.369818 6087 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:39.369832 6087 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:39.369921 6087 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370038 6087 factory.go:656] Stopping watch factory\\\\nI1201 02:56:39.370069 6087 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370153 6087 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370816 6087 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6
e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.226456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.226510 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.226534 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.226561 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.226581 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.237195 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c
429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.257262 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.280592 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.301135 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.319221 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.331568 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc 
kubenswrapper[4880]: I1201 02:56:40.331639 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.331663 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.331689 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.331707 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.340601 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.357067 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.382631 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.407797 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.422603 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.434628 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.434670 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.434687 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.434711 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.434726 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.566006 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.566043 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.566052 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.566066 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.566076 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.669074 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.669118 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.669131 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.669147 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.669159 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.771966 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.772002 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.772013 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.772027 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.772050 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.809072 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.826628 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.847504 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.873334 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.874751 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.874921 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.874993 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.875071 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.875128 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.894906 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.915341 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.936631 4880 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.956016 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.974306 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.976944 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.977055 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.977123 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.977181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.977234 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:40Z","lastTransitionTime":"2025-12-01T02:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:40 crc kubenswrapper[4880]: I1201 02:56:40.995741 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:39Z\\\",\\\"message\\\":\\\"208] Removed *v1.Pod event handler 6\\\\nI1201 02:56:39.369493 6087 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 02:56:39.369623 6087 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.369788 6087 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.369805 6087 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:39.369818 6087 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:39.369832 6087 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:39.369921 6087 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370038 6087 factory.go:656] Stopping watch factory\\\\nI1201 02:56:39.370069 6087 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370153 6087 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370816 6087 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6
e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.018672 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.033137 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.044678 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.057647 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.072600 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/1.log" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.073168 4880 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/0.log" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.074828 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.076573 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5" exitCode=1 Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.076670 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.076737 4880 scope.go:117] "RemoveContainer" containerID="2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.077109 4880 scope.go:117] "RemoveContainer" containerID="176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5" Dec 01 02:56:41 crc kubenswrapper[4880]: E1201 02:56:41.077233 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.079024 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.079046 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.079054 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.079066 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.079076 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.092016 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.101766 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.112533 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.126653 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.139224 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.151697 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.178799 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2955fb6594ed3b4896373d928bdc0878943b8c6976e125f1c07467bdff5f5b2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:39Z\\\",\\\"message\\\":\\\"208] Removed *v1.Pod event handler 6\\\\nI1201 02:56:39.369493 6087 reflector.go:311] 
Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1201 02:56:39.369623 6087 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.369788 6087 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.369805 6087 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:39.369818 6087 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:39.369832 6087 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:39.369921 6087 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370038 6087 factory.go:656] Stopping watch factory\\\\nI1201 02:56:39.370069 6087 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370153 6087 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 02:56:39.370816 6087 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] 
Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed 
*v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54
512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.181562 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.181584 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.181592 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.181623 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.181633 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.202373 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5
e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.215948 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.227418 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.239791 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.253556 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.265010 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.277078 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.285128 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.285184 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.285207 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.285239 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.285264 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.297147 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.388280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.388333 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.388344 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.388367 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.388381 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.491383 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.491450 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.491468 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.491498 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.491516 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.595050 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.595126 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.595145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.595177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.595195 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.698773 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.699046 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.699161 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.699252 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.699336 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.783332 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:41 crc kubenswrapper[4880]: E1201 02:56:41.783687 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.783376 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:41 crc kubenswrapper[4880]: E1201 02:56:41.783975 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.783372 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:41 crc kubenswrapper[4880]: E1201 02:56:41.784223 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.801786 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.801830 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.801842 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.801862 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.801890 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.904828 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.905224 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.905398 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.905583 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:41 crc kubenswrapper[4880]: I1201 02:56:41.905750 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:41Z","lastTransitionTime":"2025-12-01T02:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.008797 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.009181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.009315 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.009479 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.009634 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.083996 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/1.log" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.089405 4880 scope.go:117] "RemoveContainer" containerID="176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5" Dec 01 02:56:42 crc kubenswrapper[4880]: E1201 02:56:42.089676 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.108450 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.114124 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.114212 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.114231 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.114858 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.115126 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.128639 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.164173 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.182904 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.197198 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.218515 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.218624 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.218652 4880 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.218738 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.218760 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.220711 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.240269 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.263716 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.297288 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.322695 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.322760 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.322777 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.322803 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.322820 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.325942 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.345792 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.361562 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.379558 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.402332 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.426848 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.427396 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.427410 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.427428 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.427439 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.440996 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:42Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.530741 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.530789 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.530806 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.530829 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.530846 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.633980 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.634067 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.634082 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.634100 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.634112 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.736482 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.736556 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.736573 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.736599 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.736616 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.840229 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.840295 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.840313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.840365 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.840386 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.943792 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.943860 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.943910 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.943937 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:42 crc kubenswrapper[4880]: I1201 02:56:42.943957 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:42Z","lastTransitionTime":"2025-12-01T02:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.046681 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.046739 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.046793 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.046820 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.046862 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.151732 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.151792 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.151810 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.151837 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.151855 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.255060 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.255117 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.255133 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.255156 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.255173 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.358151 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.358190 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.358209 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.358231 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.358249 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.461549 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.461634 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.461654 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.461696 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.461718 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.564820 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.564901 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.564919 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.564947 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.564964 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.585284 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4"] Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.586184 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.590091 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.591765 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.625654 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.646791 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 
02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.668067 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.668294 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.668745 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.669130 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.669520 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.669969 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.688983 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.710615 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.732988 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.740262 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f069f0aa-c376-4cd2-91bb-a5563130fabc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.740498 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f069f0aa-c376-4cd2-91bb-a5563130fabc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.740796 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mvdkt\" (UniqueName: \"kubernetes.io/projected/f069f0aa-c376-4cd2-91bb-a5563130fabc-kube-api-access-mvdkt\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.741040 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f069f0aa-c376-4cd2-91bb-a5563130fabc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.750718 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe097902
9167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.774533 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.774613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.774638 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.774666 4880 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.774686 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.775769 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1
db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.783072 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.783172 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.783592 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.783955 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.784259 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.784428 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.799312 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.813001 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.813260 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.813386 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.813500 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.813616 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.814100 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.839947 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.841037 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.841796 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f069f0aa-c376-4cd2-91bb-a5563130fabc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.841849 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f069f0aa-c376-4cd2-91bb-a5563130fabc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.841924 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdkt\" (UniqueName: 
\"kubernetes.io/projected/f069f0aa-c376-4cd2-91bb-a5563130fabc-kube-api-access-mvdkt\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.841984 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f069f0aa-c376-4cd2-91bb-a5563130fabc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.843132 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f069f0aa-c376-4cd2-91bb-a5563130fabc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.843182 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f069f0aa-c376-4cd2-91bb-a5563130fabc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.844838 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.844891 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.844908 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.844929 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.844944 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.851024 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f069f0aa-c376-4cd2-91bb-a5563130fabc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.865338 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.869335 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.874819 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.875000 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.875111 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.875240 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.875343 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.876588 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdkt\" (UniqueName: \"kubernetes.io/projected/f069f0aa-c376-4cd2-91bb-a5563130fabc-kube-api-access-mvdkt\") pod \"ovnkube-control-plane-749d76644c-ctqw4\" (UID: \"f069f0aa-c376-4cd2-91bb-a5563130fabc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.886657 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptable
s-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.893117 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.897079 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.897130 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.897143 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.897161 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.897172 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.904918 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.909814 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.918032 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.924082 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.924127 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.924144 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.924164 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.924176 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.944558 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.948826 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:43 crc kubenswrapper[4880]: E1201 02:56:43.949081 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.950887 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.950914 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.950923 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.950941 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.950954 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:43Z","lastTransitionTime":"2025-12-01T02:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:43 crc kubenswrapper[4880]: I1201 02:56:43.958037 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:43Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.053588 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.053637 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.053646 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.053668 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.053682 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.097980 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" event={"ID":"f069f0aa-c376-4cd2-91bb-a5563130fabc","Type":"ContainerStarted","Data":"078445eae8ab7684ecb692cb192b02af9b53824b1c0fdef76af26366e5d1380f"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.156544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.156649 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.156675 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.156714 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.156751 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.260335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.260391 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.260411 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.260439 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.260457 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.344170 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-chtvv"] Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.344818 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:44 crc kubenswrapper[4880]: E1201 02:56:44.344942 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.363791 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.363913 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.363938 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.363966 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.363987 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.375392 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.392606 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.417536 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.433605 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.451290 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-925xk\" (UniqueName: \"kubernetes.io/projected/60f88b82-c5e9-4f47-91c1-4e78498b481e-kube-api-access-925xk\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.451402 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.452717 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.466364 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.466426 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.466439 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.466457 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.466468 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.469859 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.490271 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.510566 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.552211 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.552264 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-925xk\" (UniqueName: \"kubernetes.io/projected/60f88b82-c5e9-4f47-91c1-4e78498b481e-kube-api-access-925xk\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:44 crc kubenswrapper[4880]: E1201 02:56:44.552431 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 
02:56:44.552299 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: E1201 02:56:44.552523 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:56:45.052494326 +0000 UTC m=+34.563748728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.574852 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.575541 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.575571 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.575603 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.575625 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.579953 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.591836 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-925xk\" (UniqueName: \"kubernetes.io/projected/60f88b82-c5e9-4f47-91c1-4e78498b481e-kube-api-access-925xk\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.598324 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc 
kubenswrapper[4880]: I1201 02:56:44.617679 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.640274 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.655375 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 
02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.671306 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.679021 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.679069 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.679083 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.679107 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.679123 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.692815 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.725123 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.781433 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.781474 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.781485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.781512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.781525 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.790770 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.814476 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df25
99570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.826614 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.839392 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.849854 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.863316 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.880690 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.884832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.884861 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.884899 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.884915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.884927 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.891615 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.910076 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad
57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.930604 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.941671 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.956844 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.973227 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.988233 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.993395 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.993592 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.993660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.993767 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.993841 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:44Z","lastTransitionTime":"2025-12-01T02:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:44 crc kubenswrapper[4880]: I1201 02:56:44.999858 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:44Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.015778 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.027265 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.038332 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc 
kubenswrapper[4880]: I1201 02:56:45.056650 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.056831 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:57:01.05680538 +0000 UTC m=+50.568059752 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.056945 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.057097 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.057144 4880 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:56:46.057136808 +0000 UTC m=+35.568391180 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.096406 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.096460 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.096479 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.096509 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.096526 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.102584 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" event={"ID":"f069f0aa-c376-4cd2-91bb-a5563130fabc","Type":"ContainerStarted","Data":"9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.102611 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" event={"ID":"f069f0aa-c376-4cd2-91bb-a5563130fabc","Type":"ContainerStarted","Data":"7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.115390 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.124010 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.136180 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.147746 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc 
kubenswrapper[4880]: I1201 02:56:45.158053 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.158090 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.158107 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.158155 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158193 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:45 crc 
kubenswrapper[4880]: E1201 02:56:45.158217 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158276 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:01.158257875 +0000 UTC m=+50.669512267 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158319 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158335 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158346 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158362 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158384 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:01.158372478 +0000 UTC m=+50.669626850 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158406 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158428 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158921 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:01.158477201 +0000 UTC m=+50.669731613 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.158974 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:01.158949202 +0000 UTC m=+50.670203594 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.163729 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.176073 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.187464 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.198064 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.198134 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.198150 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.198171 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.198186 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.203655 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.219308 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.241410 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.256329 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.267452 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.278176 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.287836 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.298636 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.299800 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.299927 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.299994 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.300054 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.300115 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.308423 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.325382 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad
57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.402207 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.402250 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.402261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.402277 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: 
I1201 02:56:45.402296 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.506054 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.506112 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.506129 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.506153 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.506171 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.609039 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.609096 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.609119 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.609148 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.609174 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.711470 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.711526 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.711544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.711571 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.711588 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.783229 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.783339 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.783229 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.783390 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.783527 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:45 crc kubenswrapper[4880]: E1201 02:56:45.783630 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.814311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.814363 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.814380 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.814403 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.814420 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.916559 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.916608 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.916624 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.916645 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:45 crc kubenswrapper[4880]: I1201 02:56:45.916662 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:45Z","lastTransitionTime":"2025-12-01T02:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.019677 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.019761 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.019810 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.019831 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.019847 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.067431 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:46 crc kubenswrapper[4880]: E1201 02:56:46.067614 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:46 crc kubenswrapper[4880]: E1201 02:56:46.067740 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:56:48.067700783 +0000 UTC m=+37.578955205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.148828 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.148932 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.148958 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.148993 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.149019 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.252255 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.252306 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.252325 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.252348 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.252366 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.355556 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.355680 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.355699 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.355722 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.355738 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.458719 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.458774 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.458791 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.458817 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.458835 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.561127 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.561183 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.561198 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.561217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.561231 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.664195 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.664267 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.664286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.664309 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.664325 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.766958 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.767020 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.767036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.767062 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.767079 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.783801 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:46 crc kubenswrapper[4880]: E1201 02:56:46.784009 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.870183 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.870232 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.870251 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.870275 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.870292 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.974132 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.974196 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.974237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.974269 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:46 crc kubenswrapper[4880]: I1201 02:56:46.974295 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:46Z","lastTransitionTime":"2025-12-01T02:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.077855 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.077960 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.077979 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.078003 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.078020 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.180896 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.180956 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.180974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.180999 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.181017 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.284177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.284227 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.284243 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.284268 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.284285 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.387428 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.387482 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.387499 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.387522 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.387539 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.489935 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.489983 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.489992 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.490008 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.490019 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.592610 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.592640 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.592650 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.592660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.592668 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.695052 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.695365 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.695569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.695721 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.695844 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.783966 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.784004 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.784041 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:47 crc kubenswrapper[4880]: E1201 02:56:47.784173 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:47 crc kubenswrapper[4880]: E1201 02:56:47.784283 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:47 crc kubenswrapper[4880]: E1201 02:56:47.784396 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.800120 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.800176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.800193 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.800216 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.800235 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.903404 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.903486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.903511 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.903544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:47 crc kubenswrapper[4880]: I1201 02:56:47.903569 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:47Z","lastTransitionTime":"2025-12-01T02:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.007018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.007088 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.007105 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.007131 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.007148 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.090094 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:48 crc kubenswrapper[4880]: E1201 02:56:48.090269 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:48 crc kubenswrapper[4880]: E1201 02:56:48.090354 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:56:52.09032834 +0000 UTC m=+41.601582752 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.110527 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.110577 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.110594 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.110618 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.110636 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.214425 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.215141 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.215189 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.215226 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.215252 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.319352 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.319414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.319431 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.319457 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.319475 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.422594 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.422667 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.422690 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.422720 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.422741 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.526415 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.526507 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.526532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.526557 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.526574 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.629613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.629690 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.629773 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.629802 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.629822 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.733095 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.733177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.733201 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.733235 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.733258 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.783910 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:48 crc kubenswrapper[4880]: E1201 02:56:48.784161 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.836289 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.836335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.836346 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.836365 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.836377 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.938795 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.938849 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.938901 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.938927 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:48 crc kubenswrapper[4880]: I1201 02:56:48.938944 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:48Z","lastTransitionTime":"2025-12-01T02:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.041743 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.041811 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.041834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.041864 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.041940 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.145002 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.145077 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.145099 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.145128 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.145152 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.248644 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.248706 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.248728 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.248757 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.248781 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.352646 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.352703 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.352732 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.352765 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.352790 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.455978 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.456037 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.456055 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.456079 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.456099 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.558841 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.558961 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.558982 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.559008 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.559026 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.609171 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.610773 4880 scope.go:117] "RemoveContainer" containerID="176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5" Dec 01 02:56:49 crc kubenswrapper[4880]: E1201 02:56:49.611194 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.661784 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.661840 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.661857 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.661908 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.661928 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.765083 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.765154 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.765183 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.765211 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.765230 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.783690 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.784057 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:49 crc kubenswrapper[4880]: E1201 02:56:49.784245 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.784358 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:49 crc kubenswrapper[4880]: E1201 02:56:49.784430 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:49 crc kubenswrapper[4880]: E1201 02:56:49.784587 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.868523 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.868594 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.868619 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.868651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.868675 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.971056 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.971148 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.971166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.971190 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:49 crc kubenswrapper[4880]: I1201 02:56:49.971208 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:49Z","lastTransitionTime":"2025-12-01T02:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.074245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.074312 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.074336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.074366 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.074386 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.176906 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.176961 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.176990 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.177016 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.177033 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.280356 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.280400 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.280416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.280440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.280456 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.383692 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.383753 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.383775 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.383803 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.383823 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.487334 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.487421 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.487440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.487780 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.488079 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.590604 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.590655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.590672 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.590695 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.590714 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.694018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.694071 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.694087 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.694111 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.694128 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.783123 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:50 crc kubenswrapper[4880]: E1201 02:56:50.783321 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.800066 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.800205 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.800228 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.800336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.800368 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.806933 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.826602 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.846764 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.879707 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.896620 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.902684 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.902737 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.902756 4880 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.902781 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.902798 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:50Z","lastTransitionTime":"2025-12-01T02:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.913001 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 
01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.932266 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55
e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.962048 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.980489 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:50 crc kubenswrapper[4880]: I1201 02:56:50.999465 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.006135 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.006199 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.006217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc 
kubenswrapper[4880]: I1201 02:56:51.006245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.006263 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.022659 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.056184 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\
\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.075488 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.092210 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.109493 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.109560 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.109580 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.109607 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.109624 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.115219 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.130604 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.151990 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.213134 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.213205 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.213232 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.213261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.213278 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.315706 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.315775 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.315794 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.315819 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.315838 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.419366 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.419441 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.419462 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.419490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.419507 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.522605 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.522707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.522726 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.522751 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.522769 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.626210 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.626270 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.626289 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.626314 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.626332 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.729906 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.729963 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.729979 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.730002 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.730018 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.783650 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.783690 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.783737 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:51 crc kubenswrapper[4880]: E1201 02:56:51.783858 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:51 crc kubenswrapper[4880]: E1201 02:56:51.784029 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:51 crc kubenswrapper[4880]: E1201 02:56:51.784201 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.833538 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.833600 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.833618 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.833641 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.833658 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.937241 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.937296 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.937312 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.937335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:51 crc kubenswrapper[4880]: I1201 02:56:51.937352 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:51Z","lastTransitionTime":"2025-12-01T02:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.039837 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.039906 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.039917 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.039938 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.039953 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.139822 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:52 crc kubenswrapper[4880]: E1201 02:56:52.140045 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:52 crc kubenswrapper[4880]: E1201 02:56:52.140162 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:57:00.140139693 +0000 UTC m=+49.651394075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.143377 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.143435 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.143454 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.143478 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.143497 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.246993 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.247061 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.247078 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.247104 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.247122 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.350827 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.350921 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.350946 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.350977 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.350999 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.454203 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.454271 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.454290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.454316 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.454334 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.557405 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.557597 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.557627 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.557661 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.557684 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.660127 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.660223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.660241 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.660265 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.660281 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.763684 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.763741 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.763758 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.763785 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.763801 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.783021 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:52 crc kubenswrapper[4880]: E1201 02:56:52.783202 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.867509 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.867572 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.867591 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.867616 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.867633 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.971189 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.971256 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.971276 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.971300 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:52 crc kubenswrapper[4880]: I1201 02:56:52.971318 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:52Z","lastTransitionTime":"2025-12-01T02:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.075146 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.075194 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.075212 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.075236 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.075254 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.178084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.178136 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.178155 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.178179 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.178198 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.281136 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.281191 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.281209 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.281233 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.281251 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.384802 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.384863 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.384921 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.384953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.384975 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.488518 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.488562 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.488574 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.488592 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.488607 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.591850 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.591938 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.591957 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.591983 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.592001 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.695176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.695243 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.695263 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.695290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.695309 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.783327 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.783387 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.783330 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:53 crc kubenswrapper[4880]: E1201 02:56:53.783483 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:53 crc kubenswrapper[4880]: E1201 02:56:53.784184 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:53 crc kubenswrapper[4880]: E1201 02:56:53.784186 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.798185 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.798230 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.798247 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.798268 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.798287 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.900912 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.900978 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.901002 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.901030 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:53 crc kubenswrapper[4880]: I1201 02:56:53.901049 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:53Z","lastTransitionTime":"2025-12-01T02:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.003318 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.003368 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.003385 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.003408 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.003424 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.106194 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.106261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.106280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.106332 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.106358 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.112410 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.112491 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.112510 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.112532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.112551 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.136513 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:54Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.144010 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.144048 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.144059 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.144074 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.144087 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.161824 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:54Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.165324 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.165364 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.165373 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.165391 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.165400 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.178596 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:54Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.182437 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.182485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.182503 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.182521 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.182534 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.196400 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:54Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.199842 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.199889 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.199901 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.199917 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.199931 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.224362 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:54Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.224489 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.225841 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.225866 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.225886 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.225899 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.225909 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.329008 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.329062 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.329080 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.329104 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.329121 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.432003 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.432074 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.432100 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.432453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.432473 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.536698 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.536757 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.536776 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.536799 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.536816 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.639945 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.640014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.640032 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.640056 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.640075 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.742605 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.742707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.742719 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.742736 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.742747 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.783083 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:54 crc kubenswrapper[4880]: E1201 02:56:54.783351 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.844584 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.844639 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.844658 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.844684 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.844704 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.946827 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.946913 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.946932 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.946953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:54 crc kubenswrapper[4880]: I1201 02:56:54.946971 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:54Z","lastTransitionTime":"2025-12-01T02:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.049092 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.049155 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.049173 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.049198 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.049215 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.151547 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.151600 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.151619 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.151643 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.151659 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.254517 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.254586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.254607 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.254635 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.254659 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.357821 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.357914 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.357932 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.357954 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.357969 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.461001 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.461051 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.461069 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.461092 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.461109 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.563591 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.563635 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.563651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.563674 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.563690 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.666863 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.666930 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.666942 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.666983 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.666997 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.769579 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.769651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.769670 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.769695 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.769711 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.783588 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:55 crc kubenswrapper[4880]: E1201 02:56:55.783749 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.784016 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:55 crc kubenswrapper[4880]: E1201 02:56:55.784114 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.784519 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:55 crc kubenswrapper[4880]: E1201 02:56:55.784621 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.873099 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.873142 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.873159 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.873181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.873198 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.976072 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.976120 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.976136 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.976159 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:55 crc kubenswrapper[4880]: I1201 02:56:55.976179 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:55Z","lastTransitionTime":"2025-12-01T02:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.078915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.079063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.079082 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.079106 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.079128 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.182630 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.182687 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.182707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.182731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.182750 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.285859 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.285989 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.286017 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.286046 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.286068 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.389242 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.389311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.389333 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.389361 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.389381 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.492689 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.492731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.492747 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.492769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.492785 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.595613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.596770 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.596974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.597154 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.597292 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.700777 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.700838 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.700856 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.700925 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.700944 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.783797 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:56 crc kubenswrapper[4880]: E1201 02:56:56.784056 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.805373 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.805434 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.805456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.805483 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.805506 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.908774 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.908909 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.908936 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.908962 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:56 crc kubenswrapper[4880]: I1201 02:56:56.908980 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:56Z","lastTransitionTime":"2025-12-01T02:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.011286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.011349 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.011367 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.011417 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.011434 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.114498 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.114554 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.114572 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.114598 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.114616 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.217867 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.217956 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.217973 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.218002 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.218019 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.321057 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.321175 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.321194 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.321217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.321233 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.424195 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.424259 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.424283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.424311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.424332 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.527523 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.527616 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.527635 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.527660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.527680 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.631339 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.632054 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.632090 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.632125 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.632146 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.734562 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.734621 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.734639 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.734662 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.734679 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.783083 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.783136 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.783329 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:57 crc kubenswrapper[4880]: E1201 02:56:57.783279 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:57 crc kubenswrapper[4880]: E1201 02:56:57.783464 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:57 crc kubenswrapper[4880]: E1201 02:56:57.783682 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.837668 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.837736 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.837758 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.837788 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.837811 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.941345 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.941400 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.941417 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.941440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:57 crc kubenswrapper[4880]: I1201 02:56:57.941457 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:57Z","lastTransitionTime":"2025-12-01T02:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.044992 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.045057 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.045079 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.045106 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.045128 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.147758 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.147817 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.147834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.147859 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.147911 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.251696 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.251757 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.251778 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.251803 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.251825 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.354942 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.355031 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.355088 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.355176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.355197 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.458266 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.458333 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.458355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.458384 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.458405 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.561569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.561631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.561649 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.561673 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.561690 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.665162 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.665246 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.665272 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.665310 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.665339 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.769287 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.769350 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.769368 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.769393 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.769411 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.783965 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:56:58 crc kubenswrapper[4880]: E1201 02:56:58.784262 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.870814 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.874593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.874655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.874677 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.874705 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.874727 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.880394 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.895588 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6
163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\
":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:58Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.916832 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:58Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.937636 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:58Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.956428 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:58Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.967496 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6
910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:58Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.977050 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.977122 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.977141 4880 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.977166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.977187 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:58Z","lastTransitionTime":"2025-12-01T02:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:58 crc kubenswrapper[4880]: I1201 02:56:58.986300 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:58Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.002449 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.013458 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc 
kubenswrapper[4880]: I1201 02:56:59.026131 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.042777 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.055681 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.067019 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.079236 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.079379 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.079442 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.079505 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.079565 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.079756 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.099917 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.114310 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.127983 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.140674 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:56:59Z is after 2025-08-24T17:21:41Z" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.181642 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.181708 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.181731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc 
kubenswrapper[4880]: I1201 02:56:59.181754 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.181773 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.284648 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.284698 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.284715 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.284737 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.284754 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.387867 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.387934 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.387951 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.387973 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.387990 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.490578 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.490642 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.490659 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.490686 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.490706 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.593539 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.593595 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.593619 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.593649 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.593673 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.696223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.696290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.696311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.696336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.696355 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.784067 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.784131 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.784252 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:56:59 crc kubenswrapper[4880]: E1201 02:56:59.784367 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:56:59 crc kubenswrapper[4880]: E1201 02:56:59.784598 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:56:59 crc kubenswrapper[4880]: E1201 02:56:59.784823 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.799532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.799603 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.799626 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.799659 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.799683 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.903009 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.903054 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.903066 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.903084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:56:59 crc kubenswrapper[4880]: I1201 02:56:59.903096 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:56:59Z","lastTransitionTime":"2025-12-01T02:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.006378 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.006671 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.006700 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.006766 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.006786 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.110081 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.110151 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.110170 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.110195 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.110213 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.144294 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:00 crc kubenswrapper[4880]: E1201 02:57:00.144480 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:57:00 crc kubenswrapper[4880]: E1201 02:57:00.144582 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:57:16.144551376 +0000 UTC m=+65.655805788 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.213372 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.213440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.213462 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.213485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.213503 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.316036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.316081 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.316097 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.316118 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.316136 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.418309 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.418361 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.418379 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.418404 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.418421 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.522077 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.522121 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.522137 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.522161 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.522178 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.628124 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.628190 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.628208 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.628237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.628261 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.731342 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.731409 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.731427 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.731455 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.731472 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.783188 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:00 crc kubenswrapper[4880]: E1201 02:57:00.783395 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.817336 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.840490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.840539 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.840555 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.840577 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.840593 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.843786 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.855661 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.866154 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.878954 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.888295 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.900293 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.911251 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.927305 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.943944 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.944250 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.944548 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.944615 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.944708 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.944796 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:00Z","lastTransitionTime":"2025-12-01T02:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.957412 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.971649 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5
b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:00 crc kubenswrapper[4880]: I1201 02:57:00.984030 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:00Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:01 crc 
kubenswrapper[4880]: I1201 02:57:01.002479 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:01Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.016812 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:01Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.031973 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:01Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.047232 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.047313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.047339 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.047371 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.047397 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.053418 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:01Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.084957 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:01Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.149949 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.150064 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.150127 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.150194 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.150250 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.157656 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.157793 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:57:33.157760256 +0000 UTC m=+82.669014668 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.252729 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.252784 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.252801 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.252826 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.252845 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.259163 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.259224 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.259261 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.259317 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.259451 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.259554 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.259588 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.259607 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.259690 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:33.259562289 +0000 UTC m=+82.770816661 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.259764 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-01 02:57:33.259755544 +0000 UTC m=+82.771009916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.260003 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.260050 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.260068 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.260122 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:33.260101632 +0000 UTC m=+82.771356034 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.260181 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.260248 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:57:33.260228435 +0000 UTC m=+82.771482837 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.355714 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.355976 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.356061 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.356147 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.356226 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.459572 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.459631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.459654 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.459685 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.459707 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.563481 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.563840 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.564021 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.564270 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.564453 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.672988 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.673030 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.673041 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.673058 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.673070 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.775858 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.775957 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.776005 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.776030 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.776048 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.783651 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.783685 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.783814 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.783845 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.783979 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:01 crc kubenswrapper[4880]: E1201 02:57:01.784133 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.879687 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.879941 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.879964 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.879991 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.880008 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.983405 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.983462 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.983482 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.983508 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:01 crc kubenswrapper[4880]: I1201 02:57:01.983526 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:01Z","lastTransitionTime":"2025-12-01T02:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.088216 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.088316 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.088335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.088359 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.088377 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.190915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.190973 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.190991 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.191014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.191031 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.293625 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.293682 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.293700 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.293723 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.293741 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.397426 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.397489 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.397509 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.397534 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.397551 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.501054 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.501113 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.501135 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.501165 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.501190 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.604177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.604251 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.604273 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.604297 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.604314 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.706697 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.706773 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.706793 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.706820 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.706837 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.784048 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:02 crc kubenswrapper[4880]: E1201 02:57:02.784315 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.785464 4880 scope.go:117] "RemoveContainer" containerID="176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.809526 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.809575 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.809599 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.809626 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.809642 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.913717 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.914120 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.914143 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.914173 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:02 crc kubenswrapper[4880]: I1201 02:57:02.914200 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:02Z","lastTransitionTime":"2025-12-01T02:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.016980 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.017020 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.017030 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.017045 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.017055 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.120280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.120333 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.120351 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.120376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.120393 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.168919 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/1.log" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.172562 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.173406 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.193426 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.214234 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.222588 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.222669 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.222685 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.222707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.222721 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.234108 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.265594 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed 
*v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\
\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.279421 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.297252 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc 
kubenswrapper[4880]: I1201 02:57:03.308934 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.319834 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.324685 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.324724 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.324737 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.324755 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.324767 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.329581 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.339497 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.351060 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.376891 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.389285 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.404620 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.416456 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.427208 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.427251 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.427259 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.427274 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.427284 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.429680 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.441636 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.452515 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:03Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.529637 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.529688 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.529700 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.529717 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.529728 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.632283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.632338 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.632355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.632374 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.632388 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.735424 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.735474 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.735491 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.735515 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.735534 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.783062 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.783113 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:03 crc kubenswrapper[4880]: E1201 02:57:03.783184 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.783069 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:03 crc kubenswrapper[4880]: E1201 02:57:03.783405 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:03 crc kubenswrapper[4880]: E1201 02:57:03.783436 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.838411 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.838470 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.838488 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.838512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.838532 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.940967 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.941000 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.941011 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.941026 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:03 crc kubenswrapper[4880]: I1201 02:57:03.941037 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:03Z","lastTransitionTime":"2025-12-01T02:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.044410 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.044468 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.044486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.044513 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.044534 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.148447 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.148509 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.148526 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.148551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.148569 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.180040 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/2.log" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.181066 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/1.log" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.185453 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6" exitCode=1 Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.185502 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.185539 4880 scope.go:117] "RemoveContainer" containerID="176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.186808 4880 scope.go:117] "RemoveContainer" containerID="60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.187081 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.215552 4880 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c
71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.236728 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.254140 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.254196 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.254215 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.254243 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.254262 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.254561 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.275836 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.308852 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.308933 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.308953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.308977 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.308997 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.309643 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.331007 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.331648 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.337124 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.337171 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.337188 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.337212 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.337230 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.348202 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.356976 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.363964 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.364673 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.364866 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.365311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.365792 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.366142 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.386480 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.388419 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.393907 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.394177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.394289 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.394399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.394537 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.398593 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.410263 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.413425 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.413452 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.413463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.413480 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.413492 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.420476 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.425038 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.425176 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.426619 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.426751 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.426865 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.427009 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.427112 4880 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.431846 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.441243 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.457737 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.479715 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://176fd4d6cbeb1fa048d9000e5fcd00e59aaf5a54a76b5f1c1f0b7729a4f10af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:56:41Z\\\",\\\"message\\\":\\\" 6201 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 02:56:41.000981 6201 
handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:56:41.001018 6201 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:56:41.001050 6201 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:56:41.001055 6201 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:56:41.001067 6201 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:56:41.001071 6201 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:56:41.001086 6201 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:56:41.001100 6201 factory.go:656] Stopping watch factory\\\\nI1201 02:56:41.001102 6201 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:56:41.001110 6201 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:56:41.001117 6201 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1201 02:56:41.001122 6201 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:56:41.001133 6201 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:56:41.001136 6201 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1201 02:56:41.001140 6201 handler.go:208] Removed *v1.NetworkPolicy\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service 
openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947
e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.492863 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.503699 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc 
kubenswrapper[4880]: I1201 02:57:04.516992 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:04Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.530050 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.530220 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.530350 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.530508 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.530616 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.633703 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.633750 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.633768 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.633792 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.633810 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.736428 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.736479 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.736496 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.736518 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.736534 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.783784 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:04 crc kubenswrapper[4880]: E1201 02:57:04.784040 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.839577 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.839653 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.839676 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.839709 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.839730 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.942907 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.943215 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.943345 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.943519 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:04 crc kubenswrapper[4880]: I1201 02:57:04.943645 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:04Z","lastTransitionTime":"2025-12-01T02:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.047031 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.047416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.047762 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.048143 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.048444 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.151718 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.151780 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.151799 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.151824 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.151844 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.193139 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/2.log" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.198783 4880 scope.go:117] "RemoveContainer" containerID="60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6" Dec 01 02:57:05 crc kubenswrapper[4880]: E1201 02:57:05.199082 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.221633 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.252139 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: 
[]services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.255257 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.255331 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.255357 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.255392 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.255416 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.268949 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.285794 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc 
kubenswrapper[4880]: I1201 02:57:05.306806 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.328304 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.346569 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.358588 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.358638 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.358657 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.358686 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.358724 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.369562 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.389313 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.410249 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.424437 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.439203 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.450221 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.461531 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.461567 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.461578 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.461593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.461604 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.466944 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.482036 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.497009 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.510895 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.522189 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:05Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.563976 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.564047 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.564059 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.564075 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.564088 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.667292 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.667342 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.667360 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.667385 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.667401 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.770288 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.770337 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.770355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.770378 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.770396 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.782954 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.782999 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.783024 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:05 crc kubenswrapper[4880]: E1201 02:57:05.783164 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:05 crc kubenswrapper[4880]: E1201 02:57:05.783343 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:05 crc kubenswrapper[4880]: E1201 02:57:05.783494 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.873302 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.873382 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.873406 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.873438 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.873465 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.976965 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.977022 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.977039 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.977063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:05 crc kubenswrapper[4880]: I1201 02:57:05.977084 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:05Z","lastTransitionTime":"2025-12-01T02:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.080774 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.080847 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.080866 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.080920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.080943 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.184535 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.184593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.184609 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.184632 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.184649 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.287531 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.287577 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.287593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.287615 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.287632 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.390863 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.390937 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.390953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.390974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.390991 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.494014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.494055 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.494071 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.494092 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.494108 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.596924 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.596999 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.597024 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.597057 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.597081 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.699797 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.699920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.699940 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.699964 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.699984 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.784078 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:06 crc kubenswrapper[4880]: E1201 02:57:06.784286 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.802674 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.802729 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.802746 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.802769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.802786 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.905047 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.905118 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.905131 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.905151 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:06 crc kubenswrapper[4880]: I1201 02:57:06.905164 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:06Z","lastTransitionTime":"2025-12-01T02:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.008737 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.008817 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.008841 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.008873 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.008934 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.112348 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.112405 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.112427 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.112451 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.112468 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.214361 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.214436 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.214464 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.214496 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.214520 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.317610 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.317674 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.317693 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.317716 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.317734 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.420791 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.420862 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.420891 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.420913 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.420925 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.524045 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.524140 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.524188 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.524213 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.524231 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.627392 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.627435 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.627448 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.627471 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.627487 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.730178 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.730245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.730268 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.730337 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.730361 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.783618 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.783733 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.783741 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:07 crc kubenswrapper[4880]: E1201 02:57:07.783846 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:07 crc kubenswrapper[4880]: E1201 02:57:07.784122 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:07 crc kubenswrapper[4880]: E1201 02:57:07.784366 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.833529 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.833597 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.833616 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.833642 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.833659 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.936783 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.936861 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.936890 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.936973 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:07 crc kubenswrapper[4880]: I1201 02:57:07.936999 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:07Z","lastTransitionTime":"2025-12-01T02:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.040563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.040619 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.040641 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.040671 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.040692 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.143454 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.143514 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.143536 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.143582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.143605 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.246372 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.246433 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.246451 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.246473 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.246490 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.350160 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.350222 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.350239 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.350263 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.350279 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.453433 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.453876 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.454138 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.454307 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.454450 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.558176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.558268 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.558286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.558315 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.558334 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.660868 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.660965 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.660983 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.661009 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.661029 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.764187 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.764279 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.764300 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.764330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.764358 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.783795 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:08 crc kubenswrapper[4880]: E1201 02:57:08.784219 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.868031 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.868097 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.868124 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.868152 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.868169 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.972053 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.972123 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.972141 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.972166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:08 crc kubenswrapper[4880]: I1201 02:57:08.972191 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:08Z","lastTransitionTime":"2025-12-01T02:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.075041 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.075106 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.075127 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.075156 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.075175 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.177785 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.177865 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.177901 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.177919 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.177931 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.280638 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.280682 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.280692 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.280712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.280724 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.383535 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.383599 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.383623 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.383655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.383678 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.487372 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.487471 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.487497 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.487525 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.487548 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.590691 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.590754 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.590771 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.590795 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.590811 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.694579 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.694661 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.694681 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.694706 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.694726 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.783492 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.783533 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.783614 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:09 crc kubenswrapper[4880]: E1201 02:57:09.783815 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:09 crc kubenswrapper[4880]: E1201 02:57:09.784132 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:09 crc kubenswrapper[4880]: E1201 02:57:09.784192 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.798250 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.798312 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.798371 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.798401 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.798418 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.901610 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.901645 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.901660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.901684 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:09 crc kubenswrapper[4880]: I1201 02:57:09.901701 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:09Z","lastTransitionTime":"2025-12-01T02:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.005249 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.005313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.005331 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.005355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.005372 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.108264 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.108339 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.108365 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.108397 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.108421 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.255252 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.255336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.255356 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.255379 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.255396 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.358040 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.358117 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.358140 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.358169 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.358197 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.460238 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.460293 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.460302 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.460316 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.460326 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.563946 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.564014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.564032 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.564058 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.564077 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.666811 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.666915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.666953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.666986 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.667015 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.770010 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.770055 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.770070 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.770090 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.770105 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.783025 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:10 crc kubenswrapper[4880]: E1201 02:57:10.783226 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.802929 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.824809 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.836701 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.854401 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.870373 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.871445 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.871504 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.871518 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.871538 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.871679 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.883159 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.899494 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.915192 4880 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.930260 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.946721 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.968295 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.973469 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.973498 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.973506 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.973520 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.973528 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:10Z","lastTransitionTime":"2025-12-01T02:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.984120 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:10 crc kubenswrapper[4880]: I1201 02:57:10.996387 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:10Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:11 crc 
kubenswrapper[4880]: I1201 02:57:11.024223 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:11Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.044627 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:11Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.058444 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:11Z is after 2025-08-24T17:21:41Z" Dec 01 
02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.068981 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:11Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.075268 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.075294 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.075303 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.075318 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.075328 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.083239 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:11Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.177368 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:11 crc 
kubenswrapper[4880]: I1201 02:57:11.177398 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.177407 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.177418 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.177426 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.280854 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.281181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.281368 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.281521 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.281667 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.384036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.384095 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.384111 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.384139 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.384156 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.487631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.488102 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.488274 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.488429 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.488579 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.591492 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.591519 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.591528 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.591539 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.591547 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.693388 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.693416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.693424 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.693436 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.693446 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.782872 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 02:57:11 crc kubenswrapper[4880]: E1201 02:57:11.783179 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.782970 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 02:57:11 crc kubenswrapper[4880]: E1201 02:57:11.783404 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.782948 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 02:57:11 crc kubenswrapper[4880]: E1201 02:57:11.783581 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.795464 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.795502 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.795513 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.795529 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.795541 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.897559 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.897602 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.897614 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.897628 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:11 crc kubenswrapper[4880]: I1201 02:57:11.897642 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:11Z","lastTransitionTime":"2025-12-01T02:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:11.999969 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.000028 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.000045 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.000068 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.000085 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.103202 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.103276 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.103303 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.103332 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.103352 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.206478 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.206541 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.206564 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.206588 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.206605 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.309375 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.309428 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.309446 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.309468 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.309485 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.412407 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.412461 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.412483 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.412512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.412532 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.515532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.515594 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.515612 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.515641 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.515659 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.617937 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.617992 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.618010 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.618033 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.618052 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.720998 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.721039 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.721053 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.721069 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.721083 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.784118 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv"
Dec 01 02:57:12 crc kubenswrapper[4880]: E1201 02:57:12.784317 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.825370 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.825456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.825476 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.825968 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.826006 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.929774 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.929836 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.929855 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.929907 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:12 crc kubenswrapper[4880]: I1201 02:57:12.929929 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:12Z","lastTransitionTime":"2025-12-01T02:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.033403 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.033456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.033475 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.033500 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.033519 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.135794 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.135861 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.135905 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.135934 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.135951 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.238726 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.238787 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.238813 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.238842 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.238863 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.341501 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.341563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.341586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.341671 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.341698 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.443984 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.444048 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.444071 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.444099 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.444121 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.546654 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.546707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.546728 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.546756 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.546776 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.649560 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.649627 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.649674 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.649702 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.649720 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.752645 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.752684 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.752696 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.752712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.752724 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.783050 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.783139 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.783234 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 02:57:13 crc kubenswrapper[4880]: E1201 02:57:13.783155 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 02:57:13 crc kubenswrapper[4880]: E1201 02:57:13.783334 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 02:57:13 crc kubenswrapper[4880]: E1201 02:57:13.783450 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.855796 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.855836 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.855846 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.855862 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.855893 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.958416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.958755 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.958779 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.958806 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:13 crc kubenswrapper[4880]: I1201 02:57:13.958823 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:13Z","lastTransitionTime":"2025-12-01T02:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.060862 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.060915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.060926 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.060941 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.060954 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.163176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.163200 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.163208 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.163221 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.163229 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.265376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.265431 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.265454 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.265480 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.265501 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.368181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.368237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.368245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.368258 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.368268 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.438587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.438623 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.438631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.438644 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.438652 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.454998 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:14Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.458181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.458220 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.458230 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.458245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.458256 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.468308 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:14Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.471677 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.471739 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.471761 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.471789 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.471811 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.488785 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:14Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.492312 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.492376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.492398 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.492433 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.492456 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.503781 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:14Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.506636 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.506663 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.506671 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.506683 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.506692 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.522011 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:14Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.522113 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.523254 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.523272 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.523280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.523289 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.523297 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.625047 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.625070 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.625078 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.625090 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.625100 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.727271 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.727306 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.727315 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.727330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.727339 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.783889 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:14 crc kubenswrapper[4880]: E1201 02:57:14.784032 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.829678 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.829714 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.829724 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.829741 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.829751 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.932018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.932087 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.932097 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.932112 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:14 crc kubenswrapper[4880]: I1201 02:57:14.932123 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:14Z","lastTransitionTime":"2025-12-01T02:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.034126 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.034157 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.034166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.034179 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.034189 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.136169 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.136217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.136229 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.136247 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.136259 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.238161 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.238195 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.238221 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.238235 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.238244 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.340664 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.340718 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.340735 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.340761 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.340778 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.442632 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.442689 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.442710 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.442731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.442746 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.545070 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.545108 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.545123 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.545142 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.545158 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.647896 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.647936 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.647945 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.647963 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.647976 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.750399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.750425 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.750434 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.750447 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.750456 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.783047 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:15 crc kubenswrapper[4880]: E1201 02:57:15.783129 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.783253 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:15 crc kubenswrapper[4880]: E1201 02:57:15.783306 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.783410 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:15 crc kubenswrapper[4880]: E1201 02:57:15.783457 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.852290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.852374 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.852450 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.852485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.852509 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.955485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.955586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.955636 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.955675 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:15 crc kubenswrapper[4880]: I1201 02:57:15.955729 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:15Z","lastTransitionTime":"2025-12-01T02:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.058085 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.058337 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.058472 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.058573 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.058661 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.161162 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.161532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.161689 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.161859 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.162092 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.231264 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:16 crc kubenswrapper[4880]: E1201 02:57:16.231540 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:57:16 crc kubenswrapper[4880]: E1201 02:57:16.231992 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:57:48.231952105 +0000 UTC m=+97.743206637 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.264368 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.264414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.264424 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.264438 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.264446 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.367003 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.367060 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.367077 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.367101 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.367120 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.469497 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.469752 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.469872 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.469991 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.470072 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.573217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.573253 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.573264 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.573280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.573289 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.675469 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.675501 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.675510 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.675526 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.675536 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.777950 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.778022 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.778047 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.778134 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.778177 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.784003 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:16 crc kubenswrapper[4880]: E1201 02:57:16.784117 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.784440 4880 scope.go:117] "RemoveContainer" containerID="60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6" Dec 01 02:57:16 crc kubenswrapper[4880]: E1201 02:57:16.784935 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.879789 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.879821 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.879830 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.879845 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.879855 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.982643 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.982869 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.982987 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.983084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:16 crc kubenswrapper[4880]: I1201 02:57:16.983164 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:16Z","lastTransitionTime":"2025-12-01T02:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.085957 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.085987 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.085996 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.086010 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.086020 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.187465 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.187517 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.187526 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.187536 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.187544 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.278969 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/0.log" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.279011 4880 generic.go:334] "Generic (PLEG): container finished" podID="6366d207-93fa-4b9f-ae70-0bab0b293db3" containerID="1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c" exitCode=1 Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.279043 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerDied","Data":"1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.279418 4880 scope.go:117] "RemoveContainer" containerID="1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.294131 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.296953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.296995 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.297016 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.297040 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.297058 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.309194 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.322500 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.342060 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.354715 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.369605 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.385745 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.400417 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.400483 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.400500 4880 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.400522 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.400538 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.402027 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.418828 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.438948 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: 
[]services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.451105 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.464286 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc 
kubenswrapper[4880]: I1201 02:57:17.479731 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.503609 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.503647 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.503655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.503670 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.503680 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.512908 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1
81ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.563552 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.580798 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.600742 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.605346 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.605377 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.605386 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.605399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.605408 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.628813 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:17Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.707814 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.707848 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.707857 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.707888 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.707897 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.783539 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:17 crc kubenswrapper[4880]: E1201 02:57:17.783750 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.783604 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.783570 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:17 crc kubenswrapper[4880]: E1201 02:57:17.784061 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:17 crc kubenswrapper[4880]: E1201 02:57:17.784183 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.809700 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.809738 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.809749 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.809762 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.809772 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.912016 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.912096 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.912122 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.912155 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:17 crc kubenswrapper[4880]: I1201 02:57:17.912180 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:17Z","lastTransitionTime":"2025-12-01T02:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.014777 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.014827 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.014844 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.014866 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.014918 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.117445 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.117475 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.117485 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.117500 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.117512 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.220779 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.220833 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.220842 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.220856 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.220865 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.284558 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/0.log" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.284613 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerStarted","Data":"9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.305518 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\
",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.323333 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.323412 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.323424 4880 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.323437 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.323448 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.338416 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.359040 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.377613 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.394180 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.409365 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540c
cad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.425075 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d80
0e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.426274 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.426302 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.426311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.426325 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc 
kubenswrapper[4880]: I1201 02:57:18.426334 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.445769 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.456454 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.475298 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.489077 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6
910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.508589 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: 
[]services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.524753 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.528607 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.528661 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.528679 4880 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.528703 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.528720 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.539954 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 
01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.557722 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55
e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.575361 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.588529 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.608782 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:18Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.631224 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.631440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.631605 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.631808 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.632046 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.737042 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.737074 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.737084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.737098 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.737108 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.783637 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:18 crc kubenswrapper[4880]: E1201 02:57:18.783758 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.839288 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.839317 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.839343 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.839356 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.839366 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.941611 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.941653 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.941665 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.941709 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:18 crc kubenswrapper[4880]: I1201 02:57:18.941729 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:18Z","lastTransitionTime":"2025-12-01T02:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.044530 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.044975 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.045229 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.045427 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.045632 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.148390 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.148421 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.148431 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.148446 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.148456 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.250502 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.250528 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.250537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.250548 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.250557 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.352892 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.352913 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.352920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.352930 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.352939 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.454994 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.455027 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.455035 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.455049 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.455058 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.557529 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.557555 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.557562 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.557575 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.557584 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.660147 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.660196 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.660214 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.660239 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.660257 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.763209 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.763594 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.763736 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.763910 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.764032 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.783790 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.783845 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.783845 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:19 crc kubenswrapper[4880]: E1201 02:57:19.784331 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:19 crc kubenswrapper[4880]: E1201 02:57:19.784143 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:19 crc kubenswrapper[4880]: E1201 02:57:19.784442 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.867453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.867506 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.867523 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.867549 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.867567 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.969243 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.969276 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.969285 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.969299 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:19 crc kubenswrapper[4880]: I1201 02:57:19.969308 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:19Z","lastTransitionTime":"2025-12-01T02:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.071174 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.071206 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.071218 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.071232 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.071241 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.172977 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.173024 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.173033 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.173047 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.173059 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.279335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.279369 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.279377 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.279391 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.279400 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.382028 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.382055 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.382063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.382075 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.382084 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.484145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.484189 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.484203 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.484221 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.484235 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.586991 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.587012 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.587020 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.587030 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.587038 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.689283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.689341 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.689358 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.689382 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.689399 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.783242 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:20 crc kubenswrapper[4880]: E1201 02:57:20.783392 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.791722 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.791754 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.791762 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.791778 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.791789 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.798908 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.814846 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.832032 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.843356 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.882581 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.893981 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.894005 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 
02:57:20.894014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.894049 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.894060 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.901378 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.917954 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.933070 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.957715 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.969753 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.983281 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.997136 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.997185 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.997201 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.997227 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.997244 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:20Z","lastTransitionTime":"2025-12-01T02:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:20 crc kubenswrapper[4880]: I1201 02:57:20.998579 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:20Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.045313 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:21Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.056947 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:21Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.084428 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:21Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.098142 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:21Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.099795 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.099830 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.099842 4880 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.099857 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.100167 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.110298 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:21Z is after 2025-08-24T17:21:41Z" Dec 
01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.126821 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55
e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:21Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.202287 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.202345 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.202364 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.202388 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.202406 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.304458 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.304490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.304502 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.304515 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.304524 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.407356 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.407422 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.407438 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.407463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.407480 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.509537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.509578 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.509587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.509601 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.509614 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.612864 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.612962 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.612974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.612992 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.613028 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.715783 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.715818 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.715852 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.715888 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.715899 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.783857 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.783949 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.784017 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:21 crc kubenswrapper[4880]: E1201 02:57:21.784136 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:21 crc kubenswrapper[4880]: E1201 02:57:21.784422 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:21 crc kubenswrapper[4880]: E1201 02:57:21.784300 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.819208 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.819267 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.819287 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.819313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.819330 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.922578 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.922635 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.922652 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.922676 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:21 crc kubenswrapper[4880]: I1201 02:57:21.922695 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:21Z","lastTransitionTime":"2025-12-01T02:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.025381 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.025416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.025424 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.025438 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.025446 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.135246 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.135292 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.135302 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.135319 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.135332 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.238066 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.238163 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.238220 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.238244 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.238295 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.340112 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.340145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.340156 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.340171 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.340180 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.442126 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.442152 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.442160 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.442172 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.442180 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.544121 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.544149 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.544158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.544170 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.544179 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.646792 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.646824 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.646834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.646849 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.646858 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.749254 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.749306 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.749316 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.749330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.749339 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.784251 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:22 crc kubenswrapper[4880]: E1201 02:57:22.784354 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.852030 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.852324 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.852391 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.852464 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.852520 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.955596 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.955639 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.955651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.955669 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:22 crc kubenswrapper[4880]: I1201 02:57:22.955684 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:22Z","lastTransitionTime":"2025-12-01T02:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.057855 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.058173 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.058326 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.058432 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.058523 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.160783 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.161084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.161155 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.161229 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.161284 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.263899 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.264150 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.264217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.264286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.264350 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.366871 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.366920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.366928 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.366941 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.366950 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.469750 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.469811 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.469830 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.469855 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.469901 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.573041 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.573094 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.573104 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.573118 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.573127 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.675482 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.675534 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.675542 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.675557 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.675569 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.778130 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.778158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.778169 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.778206 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.778217 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.783124 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.783221 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.783126 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:23 crc kubenswrapper[4880]: E1201 02:57:23.783349 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:23 crc kubenswrapper[4880]: E1201 02:57:23.783445 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:23 crc kubenswrapper[4880]: E1201 02:57:23.783639 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.881380 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.881451 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.881475 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.881510 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.881534 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.984524 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.984580 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.984596 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.984620 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:23 crc kubenswrapper[4880]: I1201 02:57:23.984636 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:23Z","lastTransitionTime":"2025-12-01T02:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.086582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.086644 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.086655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.086670 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.086680 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.190078 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.190475 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.190695 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.190942 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.191169 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.295371 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.295761 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.296046 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.296335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.296507 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.399570 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.399631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.399653 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.399681 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.399698 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.502606 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.502669 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.502685 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.502712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.502730 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.606795 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.606854 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.606924 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.606951 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.606969 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.710099 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.710163 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.710174 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.710189 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.710201 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.783557 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:24 crc kubenswrapper[4880]: E1201 02:57:24.783783 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.813605 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.813703 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.813754 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.813778 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.813797 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.885842 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.885924 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.885942 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.885967 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.885983 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: E1201 02:57:24.906961 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:24Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.912862 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.912965 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.913052 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.913109 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.913130 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: E1201 02:57:24.934973 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:24Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.941486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.941549 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.941567 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.941593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.941616 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:24 crc kubenswrapper[4880]: E1201 02:57:24.990219 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:24Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.996421 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.996486 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.996503 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.996544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:24 crc kubenswrapper[4880]: I1201 02:57:24.996562 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:24Z","lastTransitionTime":"2025-12-01T02:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: E1201 02:57:25.020201 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:25Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:25 crc kubenswrapper[4880]: E1201 02:57:25.020415 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.022496 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.022543 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.022561 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.022587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.022607 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.125047 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.125107 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.125125 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.125149 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.125167 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.228913 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.228980 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.228999 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.229026 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.229044 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.332709 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.332797 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.332815 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.332843 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.332862 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.435170 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.435210 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.435219 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.435237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.435246 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.537512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.537628 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.537650 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.537678 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.537701 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.640742 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.640801 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.640824 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.640857 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.640913 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.743818 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.743909 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.743934 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.743963 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.743984 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.783608 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.783726 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.783646 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:25 crc kubenswrapper[4880]: E1201 02:57:25.783806 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:25 crc kubenswrapper[4880]: E1201 02:57:25.783964 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:25 crc kubenswrapper[4880]: E1201 02:57:25.784117 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.846771 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.846827 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.846845 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.846896 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.846918 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.949424 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.949461 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.949472 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.949491 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:25 crc kubenswrapper[4880]: I1201 02:57:25.949504 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:25Z","lastTransitionTime":"2025-12-01T02:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.053444 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.053520 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.053540 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.053563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.053581 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.156834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.156926 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.156944 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.156971 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.156988 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.260833 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.260947 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.260965 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.260988 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.261006 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.364149 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.364206 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.364223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.364248 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.364265 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.467414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.467468 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.467488 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.467512 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.467529 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.570355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.570421 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.570438 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.570463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.570480 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.673537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.673606 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.673622 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.673645 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.673662 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.776825 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.776913 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.776931 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.776954 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.776974 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.784182 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:26 crc kubenswrapper[4880]: E1201 02:57:26.784336 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.801048 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.880265 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.880317 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.880335 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.880359 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.880377 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.983343 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.983408 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.983430 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.983458 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:26 crc kubenswrapper[4880]: I1201 02:57:26.983478 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:26Z","lastTransitionTime":"2025-12-01T02:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.086032 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.086125 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.086150 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.086186 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.086211 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.189580 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.189667 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.189692 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.189722 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.189745 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.292527 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.292643 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.292664 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.292693 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.292711 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.396713 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.396803 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.396819 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.396866 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.396922 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.500210 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.500301 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.500356 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.500386 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.500450 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.604110 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.604190 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.604262 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.604297 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.604318 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.707668 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.707712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.707731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.707753 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.707769 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.782951 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.782995 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:27 crc kubenswrapper[4880]: E1201 02:57:27.783125 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:27 crc kubenswrapper[4880]: E1201 02:57:27.783230 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.783250 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:27 crc kubenswrapper[4880]: E1201 02:57:27.783472 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.810491 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.810551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.810570 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.810595 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.810612 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.914501 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.914581 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.914604 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.914680 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:27 crc kubenswrapper[4880]: I1201 02:57:27.914700 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:27Z","lastTransitionTime":"2025-12-01T02:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.017551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.017629 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.017651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.017679 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.017701 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.120900 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.120963 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.120987 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.121016 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.121039 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.224838 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.224964 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.224988 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.225013 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.225029 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.328020 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.328066 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.328082 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.328103 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.328122 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.431430 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.431518 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.431537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.431590 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.431616 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.534441 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.534490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.534506 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.534528 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.534544 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.637828 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.637940 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.637957 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.637979 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.637995 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.741671 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.741748 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.741772 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.741804 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.741827 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.784061 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:28 crc kubenswrapper[4880]: E1201 02:57:28.784238 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.844637 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.844689 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.844706 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.844729 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.844752 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.947314 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.947387 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.947412 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.947440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:28 crc kubenswrapper[4880]: I1201 02:57:28.947460 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:28Z","lastTransitionTime":"2025-12-01T02:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.050910 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.050967 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.050984 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.051010 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.051030 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.154322 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.154369 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.154381 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.154404 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.154417 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.257551 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.257613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.257631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.257657 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.257675 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.360900 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.360955 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.360974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.361000 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.361018 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.465032 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.465119 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.465146 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.465182 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.465206 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.567582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.567642 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.567659 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.567684 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.567700 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.671147 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.671206 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.671223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.671250 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.671268 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.774495 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.774552 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.774569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.774591 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.774610 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.783370 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:29 crc kubenswrapper[4880]: E1201 02:57:29.783526 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.783781 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:29 crc kubenswrapper[4880]: E1201 02:57:29.783913 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.784112 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:29 crc kubenswrapper[4880]: E1201 02:57:29.784206 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.878145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.878532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.879441 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.879489 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.879507 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.982834 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.982911 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.982930 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.982952 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:29 crc kubenswrapper[4880]: I1201 02:57:29.982969 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:29Z","lastTransitionTime":"2025-12-01T02:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.085786 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.085929 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.085949 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.086012 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.086044 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.188723 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.188769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.188786 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.188807 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.188825 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.291904 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.291956 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.291974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.291998 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.292015 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.395032 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.395392 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.395754 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.396189 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.396597 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.500250 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.500314 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.500336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.500364 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.500386 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.603484 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.603553 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.603570 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.603596 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.603614 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.706336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.706390 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.706407 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.706430 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.706447 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.783987 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:30 crc kubenswrapper[4880]: E1201 02:57:30.784250 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.805651 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.810375 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.811762 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.811951 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.812120 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.812276 4880 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.825403 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.845067 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.865075 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.897499 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.910842 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.914497 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.914535 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.914546 4880 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.914563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.914573 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:30Z","lastTransitionTime":"2025-12-01T02:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.921637 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 
01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.942377 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\
\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.956069 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.973552 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:30 crc kubenswrapper[4880]: I1201 02:57:30.987847 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:30Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.006822 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.019642 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab16e335-696e-43c9-881f-f8e817dbf1ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.020786 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.020832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.020851 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.020912 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.020931 4880 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.035015 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.049811 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.061753 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.078810 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.096499 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.111115 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:31Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.123130 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.123175 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.123192 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.123214 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.123233 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.225634 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.225712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.225732 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.225759 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.225777 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.328445 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.328523 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.328536 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.328553 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.328592 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.432092 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.432156 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.432174 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.432199 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.432217 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.535166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.535210 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.535227 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.535251 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.535269 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.638606 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.638672 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.638697 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.638730 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.638754 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.741745 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.741792 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.741810 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.741832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.741850 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.783454 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:31 crc kubenswrapper[4880]: E1201 02:57:31.783636 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.783991 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.784399 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:31 crc kubenswrapper[4880]: E1201 02:57:31.784556 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:31 crc kubenswrapper[4880]: E1201 02:57:31.784705 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.785016 4880 scope.go:117] "RemoveContainer" containerID="60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.845286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.845342 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.845364 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.845395 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.845419 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.948779 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.949144 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.949364 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.949522 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:31 crc kubenswrapper[4880]: I1201 02:57:31.949693 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:31Z","lastTransitionTime":"2025-12-01T02:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.057520 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.057589 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.057614 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.057646 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.057679 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.162371 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.162440 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.162457 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.162479 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.162495 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.264586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.264630 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.264679 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.264703 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.264781 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.335152 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/2.log" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.337207 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.338055 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.352606 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c999
3b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.366541 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.366587 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.366601 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.366620 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.366632 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.368437 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.384233 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.395894 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.405664 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.414736 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.432002 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.441099 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab16e335-696e-43c9-881f-f8e817dbf1ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.450911 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.461637 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.469372 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.469406 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.469419 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc 
kubenswrapper[4880]: I1201 02:57:32.469433 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.469441 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.472857 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.482150 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.492966 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.512240 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"
/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.522857 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.533163 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc 
kubenswrapper[4880]: I1201 02:57:32.545599 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.557818 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.567631 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:32Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.571369 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.571409 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.571423 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.571441 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.571455 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.674456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.674519 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.674534 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.674557 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.674572 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.777630 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.777701 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.777725 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.777752 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.777777 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.783991 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:32 crc kubenswrapper[4880]: E1201 02:57:32.784149 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.880920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.880978 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.880996 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.881019 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.881037 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.984182 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.984248 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.984271 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.984302 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:32 crc kubenswrapper[4880]: I1201 02:57:32.984326 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:32Z","lastTransitionTime":"2025-12-01T02:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.087524 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.087586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.087603 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.087628 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.087646 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.190404 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.190455 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.190471 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.190490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.190506 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.219167 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.219364 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:37.21932758 +0000 UTC m=+146.730582002 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.292974 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.293039 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.293062 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.293089 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.293112 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.320685 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.320741 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.320802 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.320832 4880 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.320863 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.320973 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:58:37.320938703 +0000 UTC m=+146.832193125 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321028 4880 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321050 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321085 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321087 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321104 4880 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321127 4880 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321147 4880 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321107 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 02:58:37.321086267 +0000 UTC m=+146.832340669 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321191 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 02:58:37.321172859 +0000 UTC m=+146.832427271 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.321215 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 02:58:37.32120356 +0000 UTC m=+146.832457972 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.344315 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/3.log" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.345484 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/2.log" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.350385 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" exitCode=1 Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.350458 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.350518 4880 scope.go:117] "RemoveContainer" containerID="60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.351396 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.351703 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.374232 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.392581 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.402147 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.402211 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.402231 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.402257 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.402281 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.414799 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2
f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.433622 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.451269 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.469984 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.501082 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60f2a5f574348d8e9c1c6b19fe57fb6b830b6213b9a70eeacc39c9b9991ec4e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:03Z\\\",\\\"message\\\":\\\"0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1201 02:57:03.640239 6464 services_controller.go:444] Built service openshift-machine-api/machine-api-operator-webhook LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640251 6464 services_controller.go:445] Built service openshift-machine-api/machine-api-operator-webhook LB template configs for network=default: []services.lbConfig(nil)\\\\nI1201 02:57:03.640271 6464 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:32Z\\\",\\\"message\\\":\\\" 6811 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 02:57:32.695679 6811 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI1201 02:57:32.695693 6811 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:57:32.695725 6811 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 02:57:32.695735 6811 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:57:32.696056 6811 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:57:32.696092 6811 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:57:32.696116 6811 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:57:32.696133 6811 factory.go:656] Stopping watch factory\\\\nI1201 02:57:32.696147 6811 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:57:32.696168 6811 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:57:32.696183 6811 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:57:32.696190 6811 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 02:57:32.696198 6811 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 02:57:32.696205 6811 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 02:57:32.696212 6811 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 
02:57:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe
1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.505698 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.505738 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.505751 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.505768 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.505796 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.518020 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 
02:57:33.532812 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc 
kubenswrapper[4880]: I1201 02:57:33.557768 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.574666 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.589632 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 
02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.603555 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.607891 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.607933 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.607947 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.607965 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.607977 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.622322 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.638426 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab16e335-696e-43c9-881f-f8e817dbf1ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.650923 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.670512 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.682033 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.705802 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:33Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.710012 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.710155 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.710251 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.710378 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.710499 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.783566 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.783681 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.783758 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.783971 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.784682 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:33 crc kubenswrapper[4880]: E1201 02:57:33.785012 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.813416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.813470 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.813489 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.813514 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.813531 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.917081 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.917142 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.917166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.917192 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:33 crc kubenswrapper[4880]: I1201 02:57:33.917210 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:33Z","lastTransitionTime":"2025-12-01T02:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.019614 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.019705 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.019724 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.019745 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.019761 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.122611 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.122910 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.123063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.123210 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.123329 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.226643 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.226694 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.226713 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.226737 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.226754 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.330175 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.330224 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.330243 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.330269 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.330287 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.359745 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/3.log" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.366035 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 02:57:34 crc kubenswrapper[4880]: E1201 02:57:34.366426 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.385080 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.406150 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.426917 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.433463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.433662 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.433865 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.434069 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.434204 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.447673 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.467024 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.499610 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:32Z\\\",\\\"message\\\":\\\" 6811 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 02:57:32.695679 6811 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:57:32.695693 6811 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:57:32.695725 6811 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 
02:57:32.695735 6811 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:57:32.696056 6811 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:57:32.696092 6811 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:57:32.696116 6811 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:57:32.696133 6811 factory.go:656] Stopping watch factory\\\\nI1201 02:57:32.696147 6811 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:57:32.696168 6811 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:57:32.696183 6811 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:57:32.696190 6811 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 02:57:32.696198 6811 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 02:57:32.696205 6811 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 02:57:32.696212 6811 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:57:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.517649 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.537516 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.537757 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.537789 4880 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.537832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.537849 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.549065 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6
e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.574005 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d22
8c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.595084 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.611317 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.629227 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.640735 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.640783 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.640795 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.640812 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.640826 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.643604 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab16e335-696e-43c9-881f-f8e817dbf1ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.660270 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.677174 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.691669 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.711834 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.731229 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.743105 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.743135 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.743145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.743164 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.743177 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.744267 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:34Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.784007 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:34 crc kubenswrapper[4880]: E1201 02:57:34.784233 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.846503 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.846553 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.846565 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.846582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.846593 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.949538 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.949584 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.949598 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.949618 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:34 crc kubenswrapper[4880]: I1201 02:57:34.949634 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:34Z","lastTransitionTime":"2025-12-01T02:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.051984 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.052034 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.052053 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.052081 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.052099 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.155017 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.155051 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.155061 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.155078 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.155089 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.188222 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.188269 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.188289 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.188313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.188332 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.222110 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.227517 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.227584 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.227600 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.227630 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.227648 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.248823 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.255533 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.255607 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.255635 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.255668 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.255788 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.272281 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [patch payload identical to the 02:57:35.248823 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.277018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.277053 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.277062 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.277078 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.277089 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.296196 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.300447 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.300493 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.300511 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.300537 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.300554 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.320664 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:35Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.320937 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.323039 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.323085 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.323100 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.323139 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.323155 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.425390 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.425423 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.425436 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.425453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.425466 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.528775 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.528832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.528855 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.528907 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.528930 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.631611 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.631660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.631677 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.631702 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.631723 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.735024 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.735073 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.735092 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.735116 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.735135 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.783341 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.783516 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.783777 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.783911 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.785150 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:35 crc kubenswrapper[4880]: E1201 02:57:35.785350 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.837736 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.837807 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.837832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.837866 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.837921 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.940676 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.940725 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.940745 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.940769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:35 crc kubenswrapper[4880]: I1201 02:57:35.940786 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:35Z","lastTransitionTime":"2025-12-01T02:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.049025 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.049093 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.049132 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.049167 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.049194 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.151716 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.151781 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.151801 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.151827 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.151846 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.255023 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.255344 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.255468 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.255649 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.255829 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.391240 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.391284 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.391296 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.391313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.391324 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.494079 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.494163 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.494190 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.494223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.494245 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.598149 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.598241 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.598260 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.598285 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.598302 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.700492 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.700791 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.700958 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.701106 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.701228 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.783429 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:36 crc kubenswrapper[4880]: E1201 02:57:36.784222 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.804770 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.804839 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.804862 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.804923 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.804949 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.908280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.908667 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.908865 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.909112 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:36 crc kubenswrapper[4880]: I1201 02:57:36.909351 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:36Z","lastTransitionTime":"2025-12-01T02:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.012567 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.012608 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.012620 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.012638 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.012649 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.116398 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.116463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.116482 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.116506 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.116524 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.219717 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.219777 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.219794 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.219821 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.219841 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.323453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.323857 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.324059 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.324274 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.324403 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.427109 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.427354 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.427444 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.427566 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.427646 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.530158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.530218 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.530230 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.530249 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.530266 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.634065 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.634124 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.634143 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.634168 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.634185 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.737637 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.737694 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.737711 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.737733 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.737752 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.783938 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.783969 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.784003 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:37 crc kubenswrapper[4880]: E1201 02:57:37.784634 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:37 crc kubenswrapper[4880]: E1201 02:57:37.784735 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:37 crc kubenswrapper[4880]: E1201 02:57:37.785233 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.840749 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.840818 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.840836 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.840861 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.840904 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.944177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.944247 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.944270 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.944303 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:37 crc kubenswrapper[4880]: I1201 02:57:37.944326 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:37Z","lastTransitionTime":"2025-12-01T02:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.047244 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.047322 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.047347 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.047378 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.047400 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.150688 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.150759 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.150782 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.150815 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.150837 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.253201 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.253296 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.253351 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.253380 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.253401 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.356184 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.356214 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.356223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.356235 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.356244 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.459836 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.459995 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.460017 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.460042 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.460060 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.562392 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.562446 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.562463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.562493 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.562544 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.672158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.672219 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.672237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.672264 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.672281 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.775765 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.775824 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.775849 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.775884 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.775988 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.783187 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:38 crc kubenswrapper[4880]: E1201 02:57:38.783355 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.879224 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.880052 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.880090 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.880119 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.880137 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.983320 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.983374 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.983392 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.983416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:38 crc kubenswrapper[4880]: I1201 02:57:38.983435 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:38Z","lastTransitionTime":"2025-12-01T02:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.086764 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.086823 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.086845 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.086924 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.086950 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.190323 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.190393 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.190414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.190442 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.190462 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.294255 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.294353 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.294379 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.294408 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.294429 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.397114 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.397175 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.397192 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.397220 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.397238 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.500980 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.501046 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.501063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.501090 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.501107 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.603648 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.603690 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.603701 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.603718 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.603729 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.706245 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.706340 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.706359 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.706383 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.706400 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.783853 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.783949 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.783853 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:39 crc kubenswrapper[4880]: E1201 02:57:39.784057 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:39 crc kubenswrapper[4880]: E1201 02:57:39.784221 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:39 crc kubenswrapper[4880]: E1201 02:57:39.784418 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.809750 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.809803 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.809825 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.809855 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.809917 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.912802 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.912863 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.912919 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.912947 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:39 crc kubenswrapper[4880]: I1201 02:57:39.912967 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:39Z","lastTransitionTime":"2025-12-01T02:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.016274 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.016338 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.016362 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.016394 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.016420 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.119728 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.119778 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.119797 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.119822 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.119840 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.223445 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.223513 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.223534 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.223557 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.223574 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.326209 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.326275 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.326292 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.326318 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.326336 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.429105 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.429157 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.429176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.429200 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.429218 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.531416 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.531464 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.531474 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.531490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.531500 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.634214 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.634271 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.634288 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.634311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.634330 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.736961 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.737033 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.737057 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.737084 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.737105 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.783951 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:40 crc kubenswrapper[4880]: E1201 02:57:40.785323 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.805446 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.822258 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.840532 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.840576 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.840593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.840616 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.840850 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.843679 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d942e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.862347 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc 
kubenswrapper[4880]: I1201 02:57:40.879805 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.901537 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.915313 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.926495 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.943515 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.943534 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.943544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.943559 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.943568 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:40Z","lastTransitionTime":"2025-12-01T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.946624 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:32Z\\\",\\\"message\\\":\\\" 6811 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 02:57:32.695679 6811 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:57:32.695693 6811 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:57:32.695725 6811 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 
02:57:32.695735 6811 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:57:32.696056 6811 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:57:32.696092 6811 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:57:32.696116 6811 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:57:32.696133 6811 factory.go:656] Stopping watch factory\\\\nI1201 02:57:32.696147 6811 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:57:32.696168 6811 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:57:32.696183 6811 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:57:32.696190 6811 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 02:57:32.696198 6811 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 02:57:32.696205 6811 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 02:57:32.696212 6811 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:57:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.972866 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.985473 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:40 crc kubenswrapper[4880]: I1201 02:57:40.997741 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:40Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.011542 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.026008 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.035233 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab16e335-696e-43c9-881f-f8e817dbf1ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.044834 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.046651 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.046712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.046735 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.046765 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.046789 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.059356 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.071354 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.087844 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:41Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.152527 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.152586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.152598 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.152617 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.152630 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.254853 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.254937 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.254955 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.254980 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.254998 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.357220 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.357252 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.357263 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.357279 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.357290 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.459452 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.459487 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.459500 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.459515 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.459526 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.561607 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.562035 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.562195 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.562342 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.562472 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.665798 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.665910 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.665928 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.665952 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.665969 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.768492 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.768566 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.768585 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.768608 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.768626 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.783422 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.783433 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:41 crc kubenswrapper[4880]: E1201 02:57:41.783612 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.783459 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:41 crc kubenswrapper[4880]: E1201 02:57:41.783763 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:41 crc kubenswrapper[4880]: E1201 02:57:41.783841 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.871275 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.871309 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.871316 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.871331 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.871341 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.973951 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.974005 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.974022 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.974051 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:41 crc kubenswrapper[4880]: I1201 02:57:41.974070 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:41Z","lastTransitionTime":"2025-12-01T02:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.076418 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.076594 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.076617 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.076683 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.076706 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.180934 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.180996 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.181019 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.181048 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.181068 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.284219 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.284289 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.284306 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.284330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.284347 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.393025 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.393403 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.393582 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.393732 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.393952 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.498354 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.498427 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.498450 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.498481 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.498503 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.602063 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.602122 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.602145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.602174 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.602195 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.706022 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.706108 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.706126 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.706185 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.706205 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.783677 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:42 crc kubenswrapper[4880]: E1201 02:57:42.783958 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.809256 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.809313 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.809330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.809356 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.809374 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.912694 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.912769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.912788 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.912816 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:42 crc kubenswrapper[4880]: I1201 02:57:42.912838 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:42Z","lastTransitionTime":"2025-12-01T02:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.015803 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.015958 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.015980 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.016009 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.016029 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.118934 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.118984 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.119002 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.119025 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.119042 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.222341 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.222422 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.222442 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.222473 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.222492 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.326439 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.326607 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.326760 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.326788 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.326807 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.430168 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.430291 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.430316 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.430345 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.430368 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.533210 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.533261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.533280 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.533305 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.533322 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.636291 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.636361 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.636387 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.636417 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.636440 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.739261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.739403 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.739430 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.739465 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.739494 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.783574 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.783655 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.783579 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:43 crc kubenswrapper[4880]: E1201 02:57:43.783754 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:43 crc kubenswrapper[4880]: E1201 02:57:43.783956 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:43 crc kubenswrapper[4880]: E1201 02:57:43.784210 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.842508 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.842568 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.842586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.842610 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.842628 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.946113 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.946170 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.946188 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.946217 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:43 crc kubenswrapper[4880]: I1201 02:57:43.946236 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:43Z","lastTransitionTime":"2025-12-01T02:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.049500 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.049544 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.049555 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.049574 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.049586 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.152748 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.152813 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.152838 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.152908 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.152957 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.255737 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.255796 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.255813 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.255839 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.255856 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.359330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.359387 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.359404 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.359429 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.359448 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.462542 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.462598 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.462614 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.462641 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.462661 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.565490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.565549 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.565566 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.565592 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.565610 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.668614 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.668688 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.668712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.668743 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.668772 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.771602 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.771666 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.771683 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.771710 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.771728 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.782970 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:44 crc kubenswrapper[4880]: E1201 02:57:44.783199 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.875207 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.875270 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.875287 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.875312 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.875329 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.979224 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.979383 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.979411 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.979443 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:44 crc kubenswrapper[4880]: I1201 02:57:44.979520 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:44Z","lastTransitionTime":"2025-12-01T02:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.084249 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.084298 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.084310 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.084328 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.084340 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.186593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.186662 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.186683 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.186712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.186733 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.290414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.290475 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.290499 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.290528 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.290550 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.394049 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.394140 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.394158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.394182 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.394199 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.395599 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.395643 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.395660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.395680 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.395694 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.413518 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.418344 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.418417 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.418434 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.418457 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.418475 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.436229 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.440955 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.441001 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.441018 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.441041 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.441061 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.457740 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.463319 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.463360 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.463376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.463396 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.463413 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.480308 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.485593 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.485644 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.485660 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.485681 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.485697 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.504574 4880 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1be2706e-f8d0-4d95-b2c7-cbb60ac451ce\\\",\\\"systemUUID\\\":\\\"083280f8-ec38-4b4c-9ae8-83321ce8fce0\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:45Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.504689 4880 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.506850 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.506897 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.506906 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.506920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.506931 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.610028 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.610087 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.610106 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.610133 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.610150 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.712752 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.712794 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.712809 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.712829 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.712848 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.783840 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.783934 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.783988 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.784094 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.784202 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:45 crc kubenswrapper[4880]: E1201 02:57:45.784327 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.815952 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.816017 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.816035 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.816059 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.816079 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.918897 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.918921 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.918929 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.918943 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:45 crc kubenswrapper[4880]: I1201 02:57:45.918952 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:45Z","lastTransitionTime":"2025-12-01T02:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.021731 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.021899 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.021920 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.021945 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.021963 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.126374 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.126444 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.126456 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.126497 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.126510 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.229710 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.229772 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.229789 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.229814 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.229832 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.332799 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.332843 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.332858 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.332910 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.332928 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.436238 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.436299 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.436318 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.436342 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.436362 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.539511 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.539568 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.539590 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.539617 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.539635 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.642114 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.642178 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.642196 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.642225 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.642247 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.745786 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.746008 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.746034 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.746059 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.746076 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.784037 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:46 crc kubenswrapper[4880]: E1201 02:57:46.784449 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.849096 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.849159 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.849177 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.849201 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.849218 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.951151 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.951482 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.951630 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.951782 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:46 crc kubenswrapper[4880]: I1201 02:57:46.951968 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:46Z","lastTransitionTime":"2025-12-01T02:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.055290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.055323 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.055334 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.055372 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.055386 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.157865 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.157937 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.157950 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.157972 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.157991 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.261410 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.261495 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.261520 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.261555 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.261577 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.365317 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.365379 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.365396 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.365423 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.365440 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.468036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.468127 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.468145 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.468172 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.468190 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.571406 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.571460 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.571477 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.571503 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.571520 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.674769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.674832 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.674849 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.674900 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.674920 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.778655 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.778720 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.778739 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.778765 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.778783 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.783308 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.783390 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.783657 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:47 crc kubenswrapper[4880]: E1201 02:57:47.783925 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:47 crc kubenswrapper[4880]: E1201 02:57:47.784203 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:47 crc kubenswrapper[4880]: E1201 02:57:47.784325 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.882355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.882413 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.882432 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.882460 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.882477 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.986259 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.986327 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.986345 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.986370 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:47 crc kubenswrapper[4880]: I1201 02:57:47.986389 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:47Z","lastTransitionTime":"2025-12-01T02:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.089428 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.089490 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.089508 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.089533 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.089550 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.192547 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.192604 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.192620 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.192645 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.192663 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.296181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.296256 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.296275 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.296300 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.296318 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.324026 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:48 crc kubenswrapper[4880]: E1201 02:57:48.324225 4880 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:57:48 crc kubenswrapper[4880]: E1201 02:57:48.324305 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs podName:60f88b82-c5e9-4f47-91c1-4e78498b481e nodeName:}" failed. No retries permitted until 2025-12-01 02:58:52.324281846 +0000 UTC m=+161.835536258 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs") pod "network-metrics-daemon-chtvv" (UID: "60f88b82-c5e9-4f47-91c1-4e78498b481e") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.399347 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.399404 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.399423 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.399446 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.399465 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.502057 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.502119 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.502137 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.502160 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.502177 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.605950 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.606004 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.606020 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.606042 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.606060 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.709507 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.709569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.709586 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.709611 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.709630 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.783074 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:48 crc kubenswrapper[4880]: E1201 02:57:48.783281 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.784279 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 02:57:48 crc kubenswrapper[4880]: E1201 02:57:48.784524 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.812943 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.813016 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.813035 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.813061 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.813097 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.916323 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.916375 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.916394 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.916419 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:48 crc kubenswrapper[4880]: I1201 02:57:48.916436 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:48Z","lastTransitionTime":"2025-12-01T02:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.019932 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.019989 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.020005 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.020031 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.020053 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.122712 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.122762 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.122779 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.122804 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.122821 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.228667 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.228721 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.228739 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.228761 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.228778 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.332444 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.332747 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.332981 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.333199 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.333382 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.437027 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.437106 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.437124 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.437155 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.437175 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.540244 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.540336 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.540357 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.540418 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.540438 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.643501 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.643552 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.643569 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.643595 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.643612 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.747174 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.747230 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.747247 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.747272 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.747288 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.783416 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.783514 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.783653 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:49 crc kubenswrapper[4880]: E1201 02:57:49.783637 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:49 crc kubenswrapper[4880]: E1201 02:57:49.783838 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:49 crc kubenswrapper[4880]: E1201 02:57:49.784095 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.849443 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.849508 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.849527 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.849553 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.849570 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.952290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.952656 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.952804 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.952984 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:49 crc kubenswrapper[4880]: I1201 02:57:49.953135 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:49Z","lastTransitionTime":"2025-12-01T02:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.056431 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.056493 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.056513 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.056540 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.056561 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.159080 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.159140 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.159160 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.159184 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.159204 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.262463 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.262524 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.262547 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.262577 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.262600 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.366102 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.366140 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.366150 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.366163 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.366172 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.468036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.468108 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.468133 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.468164 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.468182 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.570977 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.571029 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.571045 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.571069 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.571087 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.674338 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.674670 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.674811 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.675022 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.675185 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.778132 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.778209 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.778237 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.778268 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.778290 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.783126 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:50 crc kubenswrapper[4880]: E1201 02:57:50.783438 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.804309 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 02:56:23.335645 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 02:56:23.337436 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2099365531/tls.crt::/tmp/serving-cert-2099365531/tls.key\\\\\\\"\\\\nI1201 02:56:29.214603 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 02:56:29.219576 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 02:56:29.219621 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 02:56:29.219674 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 02:56:29.219700 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 02:56:29.238436 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1201 02:56:29.238454 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1201 02:56:29.238474 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238507 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 02:56:29.238520 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 02:56:29.238527 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 02:56:29.238534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 02:56:29.238540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1201 02:56:29.242111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.822825 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a3c0e1f4af3ae37b9fbc4c0f57717e848e56368f4a422c8169057e69b531354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a2d631794ae73a42a7146d2ca8d0da0c2de36db98a4f4239b8646bcd2029750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.838587 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057ec9cf-8406-4617-bda6-99517f6d2a41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f7c9993b35ded27e0a0c503b3f37b7a1c4af1623c42577025e7862263b9181\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa
9eb736bc8b60295f64696a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m9q9z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-g45lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.858718 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5znrt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6366d207-93fa-4b9f-ae70-0bab0b293db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:57:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:16Z\\\",\\\"message\\\":\\\"2025-12-01T02:56:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf\\\\n2025-12-01T02:56:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_caa492a9-7e27-431f-bed9-a4b2927a77cf to /host/opt/cni/bin/\\\\n2025-12-01T02:56:31Z [verbose] multus-daemon started\\\\n2025-12-01T02:56:31Z [verbose] 
Readiness Indicator file check\\\\n2025-12-01T02:57:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:57:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfpj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5znrt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.881401 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.881448 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.881466 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.881492 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.881509 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.893158 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f738a53-e954-40d6-a818-c3a8f145d75d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://906fc525d454585f9e8d652c764a07ef77f9cb6d24394b3c2b8097711baa7879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67002a2cdfe380a56643f087a50e3e5b6dbf796372ae6defa97952af9fbff981\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2042f61ef9f736af65251e8c023ea8c39b7ab53e1d6df3c774153d9d183c742c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c401f14d709c644738e53f178e68176f5354596aee9bab5100ebd1609717c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53082d0cf65fec378101b0dc46402c5c08b1c69374815b67a964dbd0f8d22f96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53e69a6171dae5963383eba610dec0a5c3a677293e4ff4f8829f246dc4e6936d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4812898731a607f28c598f0987599900dd92e5b53838c318a10601c7c789b07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df2599570c410de4e87ec9cc50b0c429a6e376aa5e6e9b05e3eb9881db5840b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-01T02:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.912198 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14894f63-337b-4401-9ff6-d3d6a849a0cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b62dacf6aae89c916ac340f919b6129a2ce5f91ed65d503d8bd153e72df4295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20896a3511371b777c10f4221d7e2b1ce49ff0e210f0b12791717ae91f9c5d9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a03b7a8c2db90154d8e4bfc5de5231b89a13120844879bdf667927119a71364a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fd4babbab04118c6724488d800e6bdef88a82834914613620aa019d882426d13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.930397 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.945380 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9899k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4370efb6-7bd1-4363-9c25-4db445e54a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa68945c15bc8f2010d2126bb9f8c03add2e29a3e5f1efe0979029167deb21aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f54b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9899k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.967125 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2a76648-c405-40a9-a0d4-3604ff888d39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad57b3264730c46f196e872ee3ba92e28aa0d1af120c8d76db785b55038e4896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e9c0a8efa1eec9b03619d3d1fb928a9f32e57965a37e4dd70f5b371155c232\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c7cfdf8015c6822b68f50c41dc9910303890d7dca56652c74d52c5a13528968\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9791c31f801e940cf933b3839d3e5dcc6d91c5c4f5f68200cc73cedf0a5bc606\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b540ccad59261a7a933eb290fc63fb133aba0abee3a2008f37b0037865ed4a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3
852455ed84ffa626e7df1116948b6f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02ae8b5eff9daef7dc95b4ecf38313b3852455ed84ffa626e7df1116948b6f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309defd4c2967c247d20628fc141de3508f32d9dc701b1beb1db7859902ffc15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-01T02:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p4pc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w7bw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.983491 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab16e335-696e-43c9-881f-f8e817dbf1ec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6e5c5e9737921fceb51bd0d4728c37dd15bb562f94038e1a08f6455c3b577d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc4f46e3f27b7a021083eadcb62834eac342e3d177b4843937c7ba3abe5f4c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.984331 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.984387 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.984405 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.984430 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:50 crc kubenswrapper[4880]: I1201 02:57:50.984447 4880 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:50Z","lastTransitionTime":"2025-12-01T02:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.000508 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sqgcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbed5d89-6221-4f4b-af2d-55e677d62027\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48ccaf0df6910efb926a1a817f16addcaab0746bee38180bfc8177514568b029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8ggpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sqgcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:50Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.020987 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d72fe72f2cd53a6094941a55941009796a41a5d4060e03fe2468f0d38e44c551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.039558 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.058092 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b039e51c3ff66587304556eb0db61d682c8667beb0597242536b23a09eca70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.077833 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.088242 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.088329 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.088351 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.088379 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.088403 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.112202 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T02:57:32Z\\\",\\\"message\\\":\\\" 6811 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 02:57:32.695679 6811 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 02:57:32.695693 6811 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 02:57:32.695725 6811 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 
02:57:32.695735 6811 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 02:57:32.696056 6811 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 02:57:32.696092 6811 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 02:57:32.696116 6811 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1201 02:57:32.696133 6811 factory.go:656] Stopping watch factory\\\\nI1201 02:57:32.696147 6811 ovnkube.go:599] Stopped ovnkube\\\\nI1201 02:57:32.696168 6811 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 02:57:32.696183 6811 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 02:57:32.696190 6811 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 02:57:32.696198 6811 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 02:57:32.696205 6811 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 02:57:32.696212 6811 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 02:57:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T02:57:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c6d6f4401d23f3b40
be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T02:56:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T02:56:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvnjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-52bx6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.129229 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069f0aa-c376-4cd2-91bb-a5563130fabc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dd8578e5b703f062fffe3beae83a0fef0edbcc72509375bf871222440cee40a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bc733bddb98d9945e726a16bc31b01ab6d94
2e872a430c7f98d3e9b0a23beb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvdkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-ctqw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.146601 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-chtvv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60f88b82-c5e9-4f47-91c1-4e78498b481e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-925xk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-chtvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc 
kubenswrapper[4880]: I1201 02:57:51.166224 4880 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"138baa03-9c2c-42bb-bc79-15da7f001467\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T02:56:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4846316a1b0c0fcdb416a8b809ccb5ca004a8940d16368953c35984ee84f3e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5a5fa2e31959b66209b72ea3851480b1ffe24cdfc4b739b1310c9192594616\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1666aa65187cc5875587f999c2974ac2ceccfb2f205c4ac5e53253e8f613e6a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T02:56:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T02:56:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T02:57:51Z is after 2025-08-24T17:21:41Z" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.191953 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.191998 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.192014 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.192036 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.192053 4880 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.295152 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.295200 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.295219 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.295241 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.295259 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.397822 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.397915 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.397941 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.397971 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.397993 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.501212 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.501290 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.501308 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.501330 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.501348 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.604815 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.604890 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.604906 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.604931 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.604948 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.707578 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.707647 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.707666 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.707688 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.707706 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.784057 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.784079 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:51 crc kubenswrapper[4880]: E1201 02:57:51.784243 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.784347 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:51 crc kubenswrapper[4880]: E1201 02:57:51.784460 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:51 crc kubenswrapper[4880]: E1201 02:57:51.784658 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.810378 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.810453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.810500 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.810522 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.810538 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.913577 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.913635 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.913653 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.913680 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:51 crc kubenswrapper[4880]: I1201 02:57:51.913699 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:51Z","lastTransitionTime":"2025-12-01T02:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.016341 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.016414 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.016438 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.016460 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.016477 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.120413 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.120477 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.120495 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.120528 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.120556 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.223954 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.224024 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.224042 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.224069 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.224089 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.326640 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.326719 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.326745 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.326777 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.326800 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.430052 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.430129 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.430165 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.430193 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.430212 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.533466 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.533507 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.533517 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.533529 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.533537 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.636350 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.636405 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.636417 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.636434 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.636446 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.739562 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.739601 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.739613 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.739631 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.739643 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.783846 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:52 crc kubenswrapper[4880]: E1201 02:57:52.784207 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.842328 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.842410 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.842431 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.842453 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.842469 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.945176 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.945265 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.945286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.945311 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:52 crc kubenswrapper[4880]: I1201 02:57:52.945329 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:52Z","lastTransitionTime":"2025-12-01T02:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.053563 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.054223 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.054376 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.054509 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.054635 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.158067 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.158129 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.158146 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.158169 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.158186 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.261397 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.261455 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.261472 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.261497 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.261514 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.364768 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.365642 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.365802 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.366033 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.366208 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.469081 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.469144 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.469161 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.469185 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.469206 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.571784 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.571840 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.571856 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.571947 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.571969 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.675283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.675357 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.675374 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.675399 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.675416 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.778026 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.778678 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.778846 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.779261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.779427 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.783018 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.783108 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.783138 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:53 crc kubenswrapper[4880]: E1201 02:57:53.783484 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:53 crc kubenswrapper[4880]: E1201 02:57:53.783644 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:53 crc kubenswrapper[4880]: E1201 02:57:53.783754 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.882324 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.882355 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.882367 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.882382 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.882392 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.985720 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.985790 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.985808 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.985830 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:53 crc kubenswrapper[4880]: I1201 02:57:53.985846 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:53Z","lastTransitionTime":"2025-12-01T02:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.089166 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.089212 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.089228 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.089253 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.089269 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.192580 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.192625 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.192641 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.192662 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.192679 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.295022 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.295110 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.295129 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.295199 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.295228 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.397760 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.397831 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.397852 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.397916 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.397944 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.500298 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.500372 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.500398 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.500427 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.500449 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.603753 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.603798 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.603813 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.603835 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.603856 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.706057 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.706099 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.706114 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.706141 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.706157 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.783965 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:54 crc kubenswrapper[4880]: E1201 02:57:54.784179 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.809195 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.809261 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.809283 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.809306 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.809323 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.912158 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.912234 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.912256 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.912286 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:54 crc kubenswrapper[4880]: I1201 02:57:54.912309 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:54Z","lastTransitionTime":"2025-12-01T02:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.014765 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.014842 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.014864 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.014922 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.014945 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.117856 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.117925 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.117943 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.117966 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.117983 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.221184 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.221300 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.221324 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.221354 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.221377 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.324204 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.324272 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.324293 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.324326 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.324346 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.427707 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.427760 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.427776 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.427798 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.427816 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.530604 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.530720 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.530745 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.530769 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.530789 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.633240 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.633381 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.633408 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.633433 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.633449 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.648131 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.648181 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.648198 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.648219 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.648235 4880 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T02:57:55Z","lastTransitionTime":"2025-12-01T02:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.717470 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq"] Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.718044 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.724133 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.724448 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.724837 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.725921 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.771206 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.771174263 podStartE2EDuration="57.771174263s" podCreationTimestamp="2025-12-01 02:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:55.748180702 +0000 UTC m=+105.259435114" watchObservedRunningTime="2025-12-01 02:57:55.771174263 +0000 UTC m=+105.282428665" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.783831 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.783944 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.783857 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:55 crc kubenswrapper[4880]: E1201 02:57:55.784063 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:55 crc kubenswrapper[4880]: E1201 02:57:55.784203 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:55 crc kubenswrapper[4880]: E1201 02:57:55.784344 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.815330 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9899k" podStartSLOduration=86.815310689 podStartE2EDuration="1m26.815310689s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:55.787409201 +0000 UTC m=+105.298663603" watchObservedRunningTime="2025-12-01 02:57:55.815310689 +0000 UTC m=+105.326565071" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.815489 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-w7bw7" podStartSLOduration=86.815485414 podStartE2EDuration="1m26.815485414s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:55.815451463 +0000 UTC m=+105.326705885" watchObservedRunningTime="2025-12-01 02:57:55.815485414 +0000 UTC m=+105.326739796" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.858395 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=29.858375301 podStartE2EDuration="29.858375301s" podCreationTimestamp="2025-12-01 02:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:55.832995653 +0000 UTC m=+105.344250055" watchObservedRunningTime="2025-12-01 02:57:55.858375301 +0000 UTC m=+105.369629683" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.882018 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-sqgcx" podStartSLOduration=85.881999986 podStartE2EDuration="1m25.881999986s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:55.858623246 +0000 UTC m=+105.369877658" watchObservedRunningTime="2025-12-01 02:57:55.881999986 +0000 UTC m=+105.393254358" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.905373 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.905457 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.905480 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.905498 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:55 crc kubenswrapper[4880]: I1201 02:57:55.905515 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.004452 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-ctqw4" podStartSLOduration=86.004435608 podStartE2EDuration="1m26.004435608s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:55.995085654 +0000 UTC m=+105.506340026" watchObservedRunningTime="2025-12-01 02:57:56.004435608 +0000 UTC m=+105.515689980" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006564 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006617 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006633 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006650 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006665 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006728 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 
02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.006942 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.008255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.018286 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.024494 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1b6f45c-712a-4c8a-b65b-4cf1a2da980b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pbclq\" (UID: \"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.032819 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.032803857 podStartE2EDuration="1m27.032803857s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:56.019475368 +0000 UTC m=+105.530729740" watchObservedRunningTime="2025-12-01 02:57:56.032803857 +0000 UTC m=+105.544058229" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.042306 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.053226 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.053213576 podStartE2EDuration="1m27.053213576s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:56.033394231 +0000 UTC m=+105.544648603" watchObservedRunningTime="2025-12-01 02:57:56.053213576 +0000 UTC m=+105.564467948" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.083647 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podStartSLOduration=87.083632554 podStartE2EDuration="1m27.083632554s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:56.069033195 +0000 UTC m=+105.580287587" watchObservedRunningTime="2025-12-01 02:57:56.083632554 +0000 UTC m=+105.594886926" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.083733 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5znrt" podStartSLOduration=87.083727886 podStartE2EDuration="1m27.083727886s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:56.083125982 +0000 UTC m=+105.594380374" watchObservedRunningTime="2025-12-01 02:57:56.083727886 +0000 UTC m=+105.594982258" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.108022 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.108005418 podStartE2EDuration="1m24.108005418s" podCreationTimestamp="2025-12-01 02:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:56.107523236 +0000 UTC m=+105.618777618" watchObservedRunningTime="2025-12-01 02:57:56.108005418 +0000 UTC m=+105.619259790" Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.466106 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" event={"ID":"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b","Type":"ContainerStarted","Data":"15c56fec62db5317814ff05b13e18c07fcc304b803a34de9ada6fb336ed9aa3e"} Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.466179 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" event={"ID":"d1b6f45c-712a-4c8a-b65b-4cf1a2da980b","Type":"ContainerStarted","Data":"6863370e3f201f63e5ccbef75a1c18c203772bca102f351c69abb93d1a99a9f0"} Dec 01 02:57:56 crc kubenswrapper[4880]: I1201 02:57:56.783002 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:56 crc kubenswrapper[4880]: E1201 02:57:56.783172 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:57 crc kubenswrapper[4880]: I1201 02:57:57.783390 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:57 crc kubenswrapper[4880]: I1201 02:57:57.783562 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:57 crc kubenswrapper[4880]: E1201 02:57:57.783820 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:57:57 crc kubenswrapper[4880]: I1201 02:57:57.783905 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:57 crc kubenswrapper[4880]: E1201 02:57:57.784201 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:57 crc kubenswrapper[4880]: E1201 02:57:57.784080 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:58 crc kubenswrapper[4880]: I1201 02:57:58.783297 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:57:58 crc kubenswrapper[4880]: E1201 02:57:58.783797 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:57:59 crc kubenswrapper[4880]: I1201 02:57:59.783356 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:57:59 crc kubenswrapper[4880]: I1201 02:57:59.783356 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:57:59 crc kubenswrapper[4880]: E1201 02:57:59.783806 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:57:59 crc kubenswrapper[4880]: I1201 02:57:59.783439 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:57:59 crc kubenswrapper[4880]: E1201 02:57:59.784115 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:57:59 crc kubenswrapper[4880]: E1201 02:57:59.785116 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:00 crc kubenswrapper[4880]: I1201 02:58:00.786164 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:00 crc kubenswrapper[4880]: E1201 02:58:00.788348 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:00 crc kubenswrapper[4880]: I1201 02:58:00.789511 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 02:58:00 crc kubenswrapper[4880]: E1201 02:58:00.789988 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:58:01 crc kubenswrapper[4880]: I1201 02:58:01.782825 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:01 crc kubenswrapper[4880]: I1201 02:58:01.782860 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:01 crc kubenswrapper[4880]: I1201 02:58:01.783002 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:01 crc kubenswrapper[4880]: E1201 02:58:01.783299 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:01 crc kubenswrapper[4880]: E1201 02:58:01.783359 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:01 crc kubenswrapper[4880]: E1201 02:58:01.783188 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:02 crc kubenswrapper[4880]: I1201 02:58:02.784176 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:02 crc kubenswrapper[4880]: E1201 02:58:02.784648 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.493513 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/1.log" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.494396 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/0.log" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.494474 4880 generic.go:334] "Generic (PLEG): container finished" podID="6366d207-93fa-4b9f-ae70-0bab0b293db3" containerID="9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767" exitCode=1 Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.494522 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerDied","Data":"9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767"} Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.494573 4880 scope.go:117] "RemoveContainer" containerID="1fbd0d7813ce7b6655ceb1dee8645a25afe41f46427d5589f36c1842342baa9c" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.495162 4880 scope.go:117] "RemoveContainer" containerID="9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767" Dec 01 02:58:03 crc kubenswrapper[4880]: E1201 02:58:03.495519 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5znrt_openshift-multus(6366d207-93fa-4b9f-ae70-0bab0b293db3)\"" pod="openshift-multus/multus-5znrt" podUID="6366d207-93fa-4b9f-ae70-0bab0b293db3" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.526135 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pbclq" podStartSLOduration=94.526103654 podStartE2EDuration="1m34.526103654s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:57:56.485831734 +0000 UTC m=+105.997086166" watchObservedRunningTime="2025-12-01 02:58:03.526103654 +0000 UTC m=+113.037358076" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.783316 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.783341 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:03 crc kubenswrapper[4880]: E1201 02:58:03.783472 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:03 crc kubenswrapper[4880]: E1201 02:58:03.783638 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:03 crc kubenswrapper[4880]: I1201 02:58:03.783336 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:03 crc kubenswrapper[4880]: E1201 02:58:03.783776 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:04 crc kubenswrapper[4880]: I1201 02:58:04.501364 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/1.log" Dec 01 02:58:04 crc kubenswrapper[4880]: I1201 02:58:04.783947 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:04 crc kubenswrapper[4880]: E1201 02:58:04.784228 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:05 crc kubenswrapper[4880]: I1201 02:58:05.783485 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:05 crc kubenswrapper[4880]: E1201 02:58:05.783676 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:05 crc kubenswrapper[4880]: I1201 02:58:05.783813 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:05 crc kubenswrapper[4880]: I1201 02:58:05.784692 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:05 crc kubenswrapper[4880]: E1201 02:58:05.784826 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:05 crc kubenswrapper[4880]: E1201 02:58:05.785457 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:06 crc kubenswrapper[4880]: I1201 02:58:06.783106 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:06 crc kubenswrapper[4880]: E1201 02:58:06.783277 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:07 crc kubenswrapper[4880]: I1201 02:58:07.806711 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:07 crc kubenswrapper[4880]: I1201 02:58:07.806763 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:07 crc kubenswrapper[4880]: I1201 02:58:07.806717 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:07 crc kubenswrapper[4880]: E1201 02:58:07.806910 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:07 crc kubenswrapper[4880]: E1201 02:58:07.806996 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:07 crc kubenswrapper[4880]: E1201 02:58:07.807090 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:08 crc kubenswrapper[4880]: I1201 02:58:08.783300 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:08 crc kubenswrapper[4880]: E1201 02:58:08.783851 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:09 crc kubenswrapper[4880]: I1201 02:58:09.784066 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:09 crc kubenswrapper[4880]: I1201 02:58:09.784179 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:09 crc kubenswrapper[4880]: E1201 02:58:09.784228 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:09 crc kubenswrapper[4880]: I1201 02:58:09.784269 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:09 crc kubenswrapper[4880]: E1201 02:58:09.784427 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:09 crc kubenswrapper[4880]: E1201 02:58:09.784595 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:10 crc kubenswrapper[4880]: E1201 02:58:10.723758 4880 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 01 02:58:10 crc kubenswrapper[4880]: I1201 02:58:10.784133 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:10 crc kubenswrapper[4880]: E1201 02:58:10.785950 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:10 crc kubenswrapper[4880]: E1201 02:58:10.883657 4880 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 02:58:11 crc kubenswrapper[4880]: I1201 02:58:11.783699 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:11 crc kubenswrapper[4880]: I1201 02:58:11.783810 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:11 crc kubenswrapper[4880]: E1201 02:58:11.783858 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:11 crc kubenswrapper[4880]: I1201 02:58:11.783699 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:11 crc kubenswrapper[4880]: E1201 02:58:11.784025 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:11 crc kubenswrapper[4880]: E1201 02:58:11.784230 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:12 crc kubenswrapper[4880]: I1201 02:58:12.783469 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:12 crc kubenswrapper[4880]: E1201 02:58:12.784162 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:12 crc kubenswrapper[4880]: I1201 02:58:12.784580 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 02:58:12 crc kubenswrapper[4880]: E1201 02:58:12.784837 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-52bx6_openshift-ovn-kubernetes(9e4d730b-5ca7-46cf-a62a-3c4a54bc1697)\"" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" Dec 01 02:58:13 crc kubenswrapper[4880]: I1201 02:58:13.783595 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:13 crc kubenswrapper[4880]: I1201 02:58:13.783672 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:13 crc kubenswrapper[4880]: E1201 02:58:13.783787 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:13 crc kubenswrapper[4880]: E1201 02:58:13.783973 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:13 crc kubenswrapper[4880]: I1201 02:58:13.784382 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:13 crc kubenswrapper[4880]: E1201 02:58:13.784653 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:14 crc kubenswrapper[4880]: I1201 02:58:14.784049 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:14 crc kubenswrapper[4880]: E1201 02:58:14.784203 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:15 crc kubenswrapper[4880]: I1201 02:58:15.783371 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:15 crc kubenswrapper[4880]: I1201 02:58:15.783371 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:15 crc kubenswrapper[4880]: E1201 02:58:15.783539 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:15 crc kubenswrapper[4880]: I1201 02:58:15.783680 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:15 crc kubenswrapper[4880]: E1201 02:58:15.783791 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:15 crc kubenswrapper[4880]: E1201 02:58:15.783995 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:15 crc kubenswrapper[4880]: E1201 02:58:15.885697 4880 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 02:58:16 crc kubenswrapper[4880]: I1201 02:58:16.783952 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:16 crc kubenswrapper[4880]: E1201 02:58:16.784136 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:17 crc kubenswrapper[4880]: I1201 02:58:17.783761 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:17 crc kubenswrapper[4880]: I1201 02:58:17.783850 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:17 crc kubenswrapper[4880]: E1201 02:58:17.784101 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:17 crc kubenswrapper[4880]: I1201 02:58:17.784146 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:17 crc kubenswrapper[4880]: E1201 02:58:17.784310 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:17 crc kubenswrapper[4880]: E1201 02:58:17.784571 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:18 crc kubenswrapper[4880]: I1201 02:58:18.784325 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:18 crc kubenswrapper[4880]: E1201 02:58:18.784536 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:18 crc kubenswrapper[4880]: I1201 02:58:18.784725 4880 scope.go:117] "RemoveContainer" containerID="9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767" Dec 01 02:58:19 crc kubenswrapper[4880]: I1201 02:58:19.562198 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/1.log" Dec 01 02:58:19 crc kubenswrapper[4880]: I1201 02:58:19.562586 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerStarted","Data":"2ad7d3e3ac06f8f38927fe3579d053bcdf1b0eb2b14c23f65e4968eb708a8a38"} Dec 01 02:58:19 crc kubenswrapper[4880]: I1201 02:58:19.783267 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:19 crc kubenswrapper[4880]: I1201 02:58:19.783349 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:19 crc kubenswrapper[4880]: E1201 02:58:19.783438 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:19 crc kubenswrapper[4880]: E1201 02:58:19.783535 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:19 crc kubenswrapper[4880]: I1201 02:58:19.783369 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:19 crc kubenswrapper[4880]: E1201 02:58:19.783655 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:20 crc kubenswrapper[4880]: I1201 02:58:20.783559 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:20 crc kubenswrapper[4880]: E1201 02:58:20.785309 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:20 crc kubenswrapper[4880]: E1201 02:58:20.887042 4880 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 02:58:21 crc kubenswrapper[4880]: I1201 02:58:21.783756 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:21 crc kubenswrapper[4880]: I1201 02:58:21.783946 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:21 crc kubenswrapper[4880]: E1201 02:58:21.784007 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:21 crc kubenswrapper[4880]: I1201 02:58:21.784106 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:21 crc kubenswrapper[4880]: E1201 02:58:21.784173 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:21 crc kubenswrapper[4880]: E1201 02:58:21.784288 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:22 crc kubenswrapper[4880]: I1201 02:58:22.783708 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:22 crc kubenswrapper[4880]: E1201 02:58:22.783966 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:23 crc kubenswrapper[4880]: I1201 02:58:23.783961 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:23 crc kubenswrapper[4880]: I1201 02:58:23.783993 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:23 crc kubenswrapper[4880]: E1201 02:58:23.784147 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:23 crc kubenswrapper[4880]: I1201 02:58:23.784240 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:23 crc kubenswrapper[4880]: E1201 02:58:23.784388 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:23 crc kubenswrapper[4880]: E1201 02:58:23.785152 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:23 crc kubenswrapper[4880]: I1201 02:58:23.785703 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 02:58:24 crc kubenswrapper[4880]: I1201 02:58:24.580958 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/3.log" Dec 01 02:58:24 crc kubenswrapper[4880]: I1201 02:58:24.584000 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerStarted","Data":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} Dec 01 02:58:24 crc kubenswrapper[4880]: I1201 02:58:24.584476 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:58:24 crc kubenswrapper[4880]: I1201 02:58:24.611104 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podStartSLOduration=115.611086876 podStartE2EDuration="1m55.611086876s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:24.608943507 +0000 UTC m=+134.120197899" watchObservedRunningTime="2025-12-01 02:58:24.611086876 +0000 UTC m=+134.122341248" Dec 01 02:58:24 crc kubenswrapper[4880]: I1201 02:58:24.767219 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-chtvv"] Dec 01 02:58:24 crc kubenswrapper[4880]: I1201 02:58:24.767341 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:24 crc kubenswrapper[4880]: E1201 02:58:24.767439 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:25 crc kubenswrapper[4880]: I1201 02:58:25.783616 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:25 crc kubenswrapper[4880]: I1201 02:58:25.783684 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:25 crc kubenswrapper[4880]: I1201 02:58:25.783692 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:25 crc kubenswrapper[4880]: E1201 02:58:25.784127 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:25 crc kubenswrapper[4880]: E1201 02:58:25.784226 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:25 crc kubenswrapper[4880]: E1201 02:58:25.784374 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:25 crc kubenswrapper[4880]: E1201 02:58:25.888749 4880 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 02:58:26 crc kubenswrapper[4880]: I1201 02:58:26.783898 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:26 crc kubenswrapper[4880]: E1201 02:58:26.784136 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:27 crc kubenswrapper[4880]: I1201 02:58:27.783334 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:27 crc kubenswrapper[4880]: I1201 02:58:27.783334 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:27 crc kubenswrapper[4880]: I1201 02:58:27.783351 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:27 crc kubenswrapper[4880]: E1201 02:58:27.783548 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:27 crc kubenswrapper[4880]: E1201 02:58:27.783703 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:27 crc kubenswrapper[4880]: E1201 02:58:27.783958 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:28 crc kubenswrapper[4880]: I1201 02:58:28.783362 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:28 crc kubenswrapper[4880]: E1201 02:58:28.783552 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:29 crc kubenswrapper[4880]: I1201 02:58:29.783744 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:29 crc kubenswrapper[4880]: E1201 02:58:29.783935 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 02:58:29 crc kubenswrapper[4880]: I1201 02:58:29.784259 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:29 crc kubenswrapper[4880]: E1201 02:58:29.784355 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 02:58:29 crc kubenswrapper[4880]: I1201 02:58:29.784552 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:29 crc kubenswrapper[4880]: E1201 02:58:29.784657 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 02:58:30 crc kubenswrapper[4880]: I1201 02:58:30.783818 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:30 crc kubenswrapper[4880]: E1201 02:58:30.786647 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-chtvv" podUID="60f88b82-c5e9-4f47-91c1-4e78498b481e" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.783926 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.784142 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.786118 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.786182 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.788291 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.788770 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 02:58:31 crc kubenswrapper[4880]: I1201 02:58:31.789168 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 02:58:32 crc kubenswrapper[4880]: I1201 02:58:32.783693 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:32 crc kubenswrapper[4880]: I1201 02:58:32.787109 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 02:58:32 crc kubenswrapper[4880]: I1201 02:58:32.790926 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.334895 4880 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.392831 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2br2c"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.393349 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.394765 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qb4wv"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.395221 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.396530 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.397004 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.398092 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mxnkp"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.398637 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.401539 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tzxfr"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.402193 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.419209 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.422381 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.426706 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.426680 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.426750 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.426993 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.427749 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.428137 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.428772 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.428825 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.428829 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.428862 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.429993 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.430317 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.430371 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sdrzn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.430602 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.430668 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.430813 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431176 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431420 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431631 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431693 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431700 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431799 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.431953 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432054 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432256 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432412 4880 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432314 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432585 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432595 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432706 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432750 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432781 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.432999 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.433597 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.434881 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.435308 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.438973 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.439423 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.440974 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.441326 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.441681 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.442009 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.444836 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vmhg8"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.445234 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d8pwf"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.445576 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.446043 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.447255 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.448561 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xx59g"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.448943 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.450428 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.451051 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.473255 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.477211 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.477719 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.485684 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.490983 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.491423 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.491584 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493037 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493544 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183bf44d-c621-4f91-8ddc-10093cfc2596-serving-cert\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493571 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ed088c8-ac9c-4a17-940c-4ebcb22be231-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493598 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493618 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzb7d\" (UniqueName: \"kubernetes.io/projected/3621b23f-4e41-4a02-b456-7206682db44f-kube-api-access-xzb7d\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493636 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-etcd-service-ca\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493650 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed088c8-ac9c-4a17-940c-4ebcb22be231-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493666 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-config\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493684 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvmz\" (UniqueName: \"kubernetes.io/projected/7ed088c8-ac9c-4a17-940c-4ebcb22be231-kube-api-access-srvmz\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493722 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m6j\" (UniqueName: \"kubernetes.io/projected/183bf44d-c621-4f91-8ddc-10093cfc2596-kube-api-access-q8m6j\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493739 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vsdf\" (UniqueName: \"kubernetes.io/projected/865e44df-b483-40e5-9a4f-d78fce50d532-kube-api-access-8vsdf\") pod \"downloads-7954f5f757-qb4wv\" (UID: \"865e44df-b483-40e5-9a4f-d78fce50d532\") " pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493761 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-client-ca\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493778 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3621b23f-4e41-4a02-b456-7206682db44f-serving-cert\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493793 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-etcd-ca\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493807 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3621b23f-4e41-4a02-b456-7206682db44f-etcd-client\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.493824 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-config\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 
02:58:36.495647 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.495714 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.496191 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qcvrn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.496473 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.496789 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.497225 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.500287 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.500526 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501031 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501118 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501192 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.500542 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501277 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501352 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501479 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501494 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501583 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501640 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501665 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501725 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501754 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501814 4880 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501899 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501952 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502012 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502038 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502048 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502142 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502163 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502234 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.502280 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.503169 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 
02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.503287 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.503490 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.501584 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.503739 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.503920 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.503929 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504016 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504028 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504470 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504597 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504609 4880 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504744 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504846 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.504967 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.505045 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.506053 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zld26"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.507902 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.507917 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.508013 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.510636 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.510894 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66"] Dec 01 
02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.510971 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.511136 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-htspn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.511488 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.511646 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.516714 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.517276 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.517569 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-62v7v"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.518066 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524210 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524290 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524475 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524517 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524598 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524634 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524476 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524730 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524795 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524853 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524976 4880 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.524980 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.525104 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.525172 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.525107 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.525538 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.528363 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgdhn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.528673 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.528929 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lf8k7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.529230 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.529691 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.529951 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.548649 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.554791 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.569151 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.569711 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.569787 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.570055 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.570345 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.570558 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.570821 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.572620 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tt6fz"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.573133 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.573438 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twglb"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.573899 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.574042 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.574219 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.580413 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.580816 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.581787 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.581982 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.582340 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.582422 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.582784 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q4j7p"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.582901 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.583526 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.584923 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.585468 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.586671 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.587402 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.590386 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.592677 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2br2c"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.592718 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sdrzn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594150 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tzxfr"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594559 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-config\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594604 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzx5t\" (UniqueName: \"kubernetes.io/projected/79af8363-2911-45e0-9b07-3421b2626de0-kube-api-access-kzx5t\") pod \"apiserver-76f77b778f-tzxfr\" (UID: 
\"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594627 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594643 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594691 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpvf\" (UniqueName: \"kubernetes.io/projected/59b10596-74c6-4b9f-aaa9-69d60015a048-kube-api-access-7tpvf\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594710 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-serving-cert\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594724 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-oauth-config\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594739 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq65s\" (UniqueName: \"kubernetes.io/projected/b3164eb7-c85e-4eaa-8318-b887832da2e5-kube-api-access-cq65s\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594776 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvmz\" (UniqueName: \"kubernetes.io/projected/7ed088c8-ac9c-4a17-940c-4ebcb22be231-kube-api-access-srvmz\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594791 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-service-ca-bundle\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594808 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e86e67-dde6-4b7c-883b-ce22eb444299-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594853 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-config\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594904 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-audit\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594922 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-encryption-config\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594936 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95ce7326-f487-4c1e-80e9-cc39e4af2708-audit-dir\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594973 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/59b10596-74c6-4b9f-aaa9-69d60015a048-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.594989 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595004 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-etcd-client\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595019 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3164eb7-c85e-4eaa-8318-b887832da2e5-trusted-ca\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595052 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8m6j\" (UniqueName: \"kubernetes.io/projected/183bf44d-c621-4f91-8ddc-10093cfc2596-kube-api-access-q8m6j\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595069 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vsdf\" (UniqueName: \"kubernetes.io/projected/865e44df-b483-40e5-9a4f-d78fce50d532-kube-api-access-8vsdf\") pod \"downloads-7954f5f757-qb4wv\" (UID: \"865e44df-b483-40e5-9a4f-d78fce50d532\") " pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595087 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbf8g\" (UniqueName: \"kubernetes.io/projected/95ce7326-f487-4c1e-80e9-cc39e4af2708-kube-api-access-jbf8g\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595135 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6df9r\" (UniqueName: \"kubernetes.io/projected/14fe6389-fa84-4ec6-8891-0a379e0d4f29-kube-api-access-6df9r\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595158 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-client-ca\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595172 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-auth-proxy-config\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595205 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595222 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78lns\" (UniqueName: \"kubernetes.io/projected/d48b50d9-30fa-455f-b3b3-5c781089871f-kube-api-access-78lns\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595237 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3164eb7-c85e-4eaa-8318-b887832da2e5-serving-cert\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595251 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rz5w\" (UniqueName: \"kubernetes.io/projected/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-kube-api-access-4rz5w\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595287 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmx8t\" (UniqueName: \"kubernetes.io/projected/61db3a36-06c6-43f4-b78c-90dbb61eb095-kube-api-access-zmx8t\") pod \"cluster-samples-operator-665b6dd947-x44c7\" (UID: \"61db3a36-06c6-43f4-b78c-90dbb61eb095\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595303 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-oauth-serving-cert\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595317 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3164eb7-c85e-4eaa-8318-b887832da2e5-config\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595333 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3621b23f-4e41-4a02-b456-7206682db44f-serving-cert\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595366 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-etcd-ca\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595381 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/59b10596-74c6-4b9f-aaa9-69d60015a048-images\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595395 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-service-ca\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595413 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3621b23f-4e41-4a02-b456-7206682db44f-etcd-client\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595445 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8mn\" (UniqueName: \"kubernetes.io/projected/bdeead84-dfe5-482c-af20-6bad5984c7bf-kube-api-access-bv8mn\") pod \"dns-operator-744455d44c-xx59g\" (UID: \"bdeead84-dfe5-482c-af20-6bad5984c7bf\") " pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595462 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e86e67-dde6-4b7c-883b-ce22eb444299-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595477 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-client-ca\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595494 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-config\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595525 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-console-config\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595542 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183bf44d-c621-4f91-8ddc-10093cfc2596-serving-cert\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595558 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ed088c8-ac9c-4a17-940c-4ebcb22be231-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595572 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48b50d9-30fa-455f-b3b3-5c781089871f-serving-cert\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595612 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-trusted-ca-bundle\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595625 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-config\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595639 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-image-import-ca\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595669 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-encryption-config\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595685 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bdeead84-dfe5-482c-af20-6bad5984c7bf-metrics-tls\") pod \"dns-operator-744455d44c-xx59g\" (UID: \"bdeead84-dfe5-482c-af20-6bad5984c7bf\") " pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595699 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-etcd-client\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595713 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fe6389-fa84-4ec6-8891-0a379e0d4f29-serving-cert\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595729 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595759 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-serving-cert\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595775 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-config\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595792 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5vw\" (UniqueName: \"kubernetes.io/projected/a333682a-02d1-4a1e-900e-7a78e7b67317-kube-api-access-cj5vw\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595822 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsvnp\" (UniqueName: \"kubernetes.io/projected/76e86e67-dde6-4b7c-883b-ce22eb444299-kube-api-access-jsvnp\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595841 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-audit-policies\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595859 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzb7d\" (UniqueName: \"kubernetes.io/projected/3621b23f-4e41-4a02-b456-7206682db44f-kube-api-access-xzb7d\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595893 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-machine-approver-tls\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595910 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-etcd-serving-ca\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595925 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595939 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b10596-74c6-4b9f-aaa9-69d60015a048-config\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595970 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqtks\" (UniqueName: \"kubernetes.io/projected/747403d3-576b-4621-8cb3-b9122348ec98-kube-api-access-pqtks\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.595985 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcnw\" (UniqueName: \"kubernetes.io/projected/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-kube-api-access-jqcnw\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596001 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-config\") pod 
\"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596016 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596053 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-etcd-service-ca\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596078 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed088c8-ac9c-4a17-940c-4ebcb22be231-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596097 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79af8363-2911-45e0-9b07-3421b2626de0-audit-dir\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596131 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a333682a-02d1-4a1e-900e-7a78e7b67317-serving-cert\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596146 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d48b50d9-30fa-455f-b3b3-5c781089871f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596164 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/79af8363-2911-45e0-9b07-3421b2626de0-node-pullsecrets\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596179 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/61db3a36-06c6-43f4-b78c-90dbb61eb095-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x44c7\" (UID: \"61db3a36-06c6-43f4-b78c-90dbb61eb095\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596210 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: 
\"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596227 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-serving-cert\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.596903 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.597022 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-config\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.599559 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-client-ca\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.599707 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.600221 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-etcd-service-ca\") pod 
\"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.600606 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3621b23f-4e41-4a02-b456-7206682db44f-etcd-ca\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.600693 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-config\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.600741 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed088c8-ac9c-4a17-940c-4ebcb22be231-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.601512 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.601739 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57"] Dec 01 02:58:36 crc kubenswrapper[4880]: 
I1201 02:58:36.604405 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3621b23f-4e41-4a02-b456-7206682db44f-serving-cert\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.621749 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mxnkp"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.622866 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.623883 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zld26"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.624285 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3621b23f-4e41-4a02-b456-7206682db44f-etcd-client\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.630240 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.630431 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183bf44d-c621-4f91-8ddc-10093cfc2596-serving-cert\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.631524 4880 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgdhn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.632290 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.633972 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.634183 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.636155 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ed088c8-ac9c-4a17-940c-4ebcb22be231-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.641515 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qb4wv"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.642561 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.646283 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d8pwf"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.646325 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xx59g"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.646334 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb"] Dec 01 
02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.648440 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.649404 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qcvrn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.651897 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.659137 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.659167 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xj7f7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.662346 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j5tcx"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.662965 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.666159 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-9dvqb"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.666383 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.667288 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.668805 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.668900 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.670270 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.672081 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.674488 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.675371 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.675953 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vmhg8"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.677604 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-htspn"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.680385 4880 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q4j7p"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.681903 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.683380 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.685057 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lf8k7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.687097 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.688167 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.689230 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tt6fz"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.690639 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.692154 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twglb"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.694195 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xj7f7"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.694630 4880 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.695333 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7wlgm"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.696372 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.696861 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j5tcx"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697082 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f0c30e2-825b-40cc-8c89-f454853ded08-metrics-tls\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697128 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbf8g\" (UniqueName: \"kubernetes.io/projected/95ce7326-f487-4c1e-80e9-cc39e4af2708-kube-api-access-jbf8g\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697150 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-dir\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697378 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6ns\" (UniqueName: \"kubernetes.io/projected/86a373c4-7ca5-4e3e-91a7-4dae1241a7fb-kube-api-access-fv6ns\") pod \"package-server-manager-789f6589d5-4npfs\" (UID: \"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697406 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6df9r\" (UniqueName: \"kubernetes.io/projected/14fe6389-fa84-4ec6-8891-0a379e0d4f29-kube-api-access-6df9r\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697426 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-auth-proxy-config\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697495 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l8bx\" (UniqueName: \"kubernetes.io/projected/8a71431a-c15f-457e-9058-577e362c8f8a-kube-api-access-4l8bx\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697513 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697547 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78lns\" (UniqueName: \"kubernetes.io/projected/d48b50d9-30fa-455f-b3b3-5c781089871f-kube-api-access-78lns\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697573 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3164eb7-c85e-4eaa-8318-b887832da2e5-serving-cert\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697597 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7p8p\" (UniqueName: \"kubernetes.io/projected/316090ee-bdeb-4d02-aee1-6734a421c126-kube-api-access-f7p8p\") pod \"control-plane-machine-set-operator-78cbb6b69f-khjz7\" (UID: \"316090ee-bdeb-4d02-aee1-6734a421c126\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697614 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwtq\" (UniqueName: \"kubernetes.io/projected/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-kube-api-access-jjwtq\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 
02:58:36.697631 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rz5w\" (UniqueName: \"kubernetes.io/projected/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-kube-api-access-4rz5w\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697650 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3164eb7-c85e-4eaa-8318-b887832da2e5-config\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697668 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-oauth-serving-cert\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697685 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/59b10596-74c6-4b9f-aaa9-69d60015a048-images\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697703 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2fsq\" (UniqueName: \"kubernetes.io/projected/2c6e35c8-2541-40a3-8d9e-de756d5b821a-kube-api-access-n2fsq\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697722 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-client-ca\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697738 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939a734d-ecb7-43f1-a7be-e05668e0cc32-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697756 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697782 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-console-config\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697799 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d48b50d9-30fa-455f-b3b3-5c781089871f-serving-cert\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697815 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-trusted-ca-bundle\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697834 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-config\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697888 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7wlgm"] Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697906 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-image-import-ca\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697948 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf25k\" (UniqueName: \"kubernetes.io/projected/6a21c7c7-1f54-4cfe-af87-11e397fead60-kube-api-access-hf25k\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: 
\"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.697982 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fe6389-fa84-4ec6-8891-0a379e0d4f29-serving-cert\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698007 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-config\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698034 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsvnp\" (UniqueName: \"kubernetes.io/projected/76e86e67-dde6-4b7c-883b-ce22eb444299-kube-api-access-jsvnp\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698058 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a71431a-c15f-457e-9058-577e362c8f8a-webhook-cert\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698075 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-etcd-serving-ca\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698091 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698093 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-auth-proxy-config\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698113 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqtks\" (UniqueName: \"kubernetes.io/projected/747403d3-576b-4621-8cb3-b9122348ec98-kube-api-access-pqtks\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698137 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-config\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 
02:58:36.698159 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698177 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698194 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698212 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a333682a-02d1-4a1e-900e-7a78e7b67317-serving-cert\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698232 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/d48b50d9-30fa-455f-b3b3-5c781089871f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698251 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/61db3a36-06c6-43f4-b78c-90dbb61eb095-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x44c7\" (UID: \"61db3a36-06c6-43f4-b78c-90dbb61eb095\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698269 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698289 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7cca742-7184-431c-b480-273f5fbe6dba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698306 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698323 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57f8716f-6b5d-4f01-9341-674dba56876a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698359 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698738 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.698991 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-config\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.699033 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-config\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.699242 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3164eb7-c85e-4eaa-8318-b887832da2e5-config\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.699414 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-config\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.699468 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-serving-cert\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.699499 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.699770 4880 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-etcd-serving-ca\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.700140 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-oauth-serving-cert\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.700255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-trusted-ca-bundle\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.700412 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.700424 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3164eb7-c85e-4eaa-8318-b887832da2e5-serving-cert\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.700937 4880 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-client-ca\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.701085 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-console-config\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.701234 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d48b50d9-30fa-455f-b3b3-5c781089871f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.701346 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.702196 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/59b10596-74c6-4b9f-aaa9-69d60015a048-images\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 
02:58:36.702800 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-image-import-ca\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.702839 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e86e67-dde6-4b7c-883b-ce22eb444299-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.702941 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703158 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703322 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-config\") pod 
\"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703359 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zjx\" (UniqueName: \"kubernetes.io/projected/6f0c30e2-825b-40cc-8c89-f454853ded08-kube-api-access-q9zjx\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703383 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95ce7326-f487-4c1e-80e9-cc39e4af2708-audit-dir\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703407 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a734d-ecb7-43f1-a7be-e05668e0cc32-config\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703429 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/59b10596-74c6-4b9f-aaa9-69d60015a048-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703491 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703516 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-profile-collector-cert\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703551 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-policies\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703587 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703617 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-srv-cert\") pod 
\"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703638 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a373c4-7ca5-4e3e-91a7-4dae1241a7fb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4npfs\" (UID: \"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703662 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ab44990-cfce-4e00-8566-b8902400d263-proxy-tls\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703693 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939a734d-ecb7-43f1-a7be-e05668e0cc32-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703698 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-config\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc 
kubenswrapper[4880]: I1201 02:58:36.703730 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/316090ee-bdeb-4d02-aee1-6734a421c126-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-khjz7\" (UID: \"316090ee-bdeb-4d02-aee1-6734a421c126\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703749 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95ce7326-f487-4c1e-80e9-cc39e4af2708-audit-dir\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703838 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703863 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7cca742-7184-431c-b480-273f5fbe6dba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703914 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmx8t\" (UniqueName: 
\"kubernetes.io/projected/61db3a36-06c6-43f4-b78c-90dbb61eb095-kube-api-access-zmx8t\") pod \"cluster-samples-operator-665b6dd947-x44c7\" (UID: \"61db3a36-06c6-43f4-b78c-90dbb61eb095\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703937 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-service-ca\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703957 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8mn\" (UniqueName: \"kubernetes.io/projected/bdeead84-dfe5-482c-af20-6bad5984c7bf-kube-api-access-bv8mn\") pod \"dns-operator-744455d44c-xx59g\" (UID: \"bdeead84-dfe5-482c-af20-6bad5984c7bf\") " pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.703993 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbthd\" (UniqueName: \"kubernetes.io/projected/57f8716f-6b5d-4f01-9341-674dba56876a-kube-api-access-kbthd\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704047 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e86e67-dde6-4b7c-883b-ce22eb444299-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc 
kubenswrapper[4880]: I1201 02:58:36.704142 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f0c30e2-825b-40cc-8c89-f454853ded08-trusted-ca\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704671 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e86e67-dde6-4b7c-883b-ce22eb444299-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704706 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c6e35c8-2541-40a3-8d9e-de756d5b821a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704728 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4z2\" (UniqueName: \"kubernetes.io/projected/f36bde77-88b0-46fb-b33d-85c7c430ab11-kube-api-access-dx4z2\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704765 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-encryption-config\") 
pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704794 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-proxy-tls\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.704912 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-service-ca\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705043 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bdeead84-dfe5-482c-af20-6bad5984c7bf-metrics-tls\") pod \"dns-operator-744455d44c-xx59g\" (UID: \"bdeead84-dfe5-482c-af20-6bad5984c7bf\") " pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705117 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-etcd-client\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705140 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-serving-cert\") pod 
\"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705209 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5vw\" (UniqueName: \"kubernetes.io/projected/a333682a-02d1-4a1e-900e-7a78e7b67317-kube-api-access-cj5vw\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705255 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-audit-policies\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705281 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57f8716f-6b5d-4f01-9341-674dba56876a-srv-cert\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.705739 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fe6389-fa84-4ec6-8891-0a379e0d4f29-serving-cert\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706156 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/76e86e67-dde6-4b7c-883b-ce22eb444299-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706255 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-machine-approver-tls\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706303 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b10596-74c6-4b9f-aaa9-69d60015a048-config\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706343 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqcnw\" (UniqueName: \"kubernetes.io/projected/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-kube-api-access-jqcnw\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706674 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-audit-policies\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc 
kubenswrapper[4880]: I1201 02:58:36.706912 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-encryption-config\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706949 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b10596-74c6-4b9f-aaa9-69d60015a048-config\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.706977 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707024 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a71431a-c15f-457e-9058-577e362c8f8a-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707085 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: 
\"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707158 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/79af8363-2911-45e0-9b07-3421b2626de0-audit-dir\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707182 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/5ab44990-cfce-4e00-8566-b8902400d263-kube-api-access-vkkf5\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707211 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/79af8363-2911-45e0-9b07-3421b2626de0-node-pullsecrets\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707229 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ab44990-cfce-4e00-8566-b8902400d263-images\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707241 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/79af8363-2911-45e0-9b07-3421b2626de0-audit-dir\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707246 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-serving-cert\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707282 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzx5t\" (UniqueName: \"kubernetes.io/projected/79af8363-2911-45e0-9b07-3421b2626de0-kube-api-access-kzx5t\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707301 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f0c30e2-825b-40cc-8c89-f454853ded08-bound-sa-token\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707321 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707338 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7tpvf\" (UniqueName: \"kubernetes.io/projected/59b10596-74c6-4b9f-aaa9-69d60015a048-kube-api-access-7tpvf\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707353 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-oauth-config\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707371 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq65s\" (UniqueName: \"kubernetes.io/projected/b3164eb7-c85e-4eaa-8318-b887832da2e5-kube-api-access-cq65s\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707388 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-service-ca-bundle\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707404 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a71431a-c15f-457e-9058-577e362c8f8a-tmpfs\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc 
kubenswrapper[4880]: I1201 02:58:36.707440 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707467 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-audit\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707487 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-encryption-config\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707526 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707548 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3164eb7-c85e-4eaa-8318-b887832da2e5-trusted-ca\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " 
pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707551 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/59b10596-74c6-4b9f-aaa9-69d60015a048-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707569 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ab44990-cfce-4e00-8566-b8902400d263-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707592 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7cca742-7184-431c-b480-273f5fbe6dba-config\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707615 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-etcd-client\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707697 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/79af8363-2911-45e0-9b07-3421b2626de0-node-pullsecrets\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.707944 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-serving-cert\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.708339 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.708398 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/79af8363-2911-45e0-9b07-3421b2626de0-audit\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.709228 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bdeead84-dfe5-482c-af20-6bad5984c7bf-metrics-tls\") pod \"dns-operator-744455d44c-xx59g\" (UID: \"bdeead84-dfe5-482c-af20-6bad5984c7bf\") " pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.709302 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a333682a-02d1-4a1e-900e-7a78e7b67317-service-ca-bundle\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.709561 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95ce7326-f487-4c1e-80e9-cc39e4af2708-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.709636 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-machine-approver-tls\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.710064 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-serving-cert\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.710162 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-oauth-config\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.710282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3164eb7-c85e-4eaa-8318-b887832da2e5-trusted-ca\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.710402 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48b50d9-30fa-455f-b3b3-5c781089871f-serving-cert\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.711018 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-encryption-config\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.711197 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95ce7326-f487-4c1e-80e9-cc39e4af2708-etcd-client\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.711217 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-serving-cert\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.711845 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/61db3a36-06c6-43f4-b78c-90dbb61eb095-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x44c7\" (UID: \"61db3a36-06c6-43f4-b78c-90dbb61eb095\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.712723 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a333682a-02d1-4a1e-900e-7a78e7b67317-serving-cert\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.712806 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79af8363-2911-45e0-9b07-3421b2626de0-etcd-client\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.715929 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.734933 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.754866 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.775035 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.796044 4880 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.809173 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.809319 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a71431a-c15f-457e-9058-577e362c8f8a-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.809423 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ab44990-cfce-4e00-8566-b8902400d263-images\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.809531 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/5ab44990-cfce-4e00-8566-b8902400d263-kube-api-access-vkkf5\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.809639 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6f0c30e2-825b-40cc-8c89-f454853ded08-bound-sa-token\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.809946 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a71431a-c15f-457e-9058-577e362c8f8a-tmpfs\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.810078 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.810194 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ab44990-cfce-4e00-8566-b8902400d263-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.810914 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7cca742-7184-431c-b480-273f5fbe6dba-config\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc 
kubenswrapper[4880]: I1201 02:58:36.811101 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f0c30e2-825b-40cc-8c89-f454853ded08-metrics-tls\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811229 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-dir\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811352 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6ns\" (UniqueName: \"kubernetes.io/projected/86a373c4-7ca5-4e3e-91a7-4dae1241a7fb-kube-api-access-fv6ns\") pod \"package-server-manager-789f6589d5-4npfs\" (UID: \"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.810857 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ab44990-cfce-4e00-8566-b8902400d263-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811539 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l8bx\" (UniqueName: \"kubernetes.io/projected/8a71431a-c15f-457e-9058-577e362c8f8a-kube-api-access-4l8bx\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: 
\"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811329 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-dir\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811669 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwtq\" (UniqueName: \"kubernetes.io/projected/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-kube-api-access-jjwtq\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811712 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a71431a-c15f-457e-9058-577e362c8f8a-tmpfs\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811738 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7p8p\" (UniqueName: \"kubernetes.io/projected/316090ee-bdeb-4d02-aee1-6734a421c126-kube-api-access-f7p8p\") pod \"control-plane-machine-set-operator-78cbb6b69f-khjz7\" (UID: \"316090ee-bdeb-4d02-aee1-6734a421c126\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811790 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2fsq\" (UniqueName: 
\"kubernetes.io/projected/2c6e35c8-2541-40a3-8d9e-de756d5b821a-kube-api-access-n2fsq\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811810 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939a734d-ecb7-43f1-a7be-e05668e0cc32-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811846 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811891 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf25k\" (UniqueName: \"kubernetes.io/projected/6a21c7c7-1f54-4cfe-af87-11e397fead60-kube-api-access-hf25k\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811917 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a71431a-c15f-457e-9058-577e362c8f8a-webhook-cert\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811942 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811962 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.811981 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812000 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7cca742-7184-431c-b480-273f5fbe6dba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812017 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812034 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/57f8716f-6b5d-4f01-9341-674dba56876a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812054 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812071 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zjx\" (UniqueName: \"kubernetes.io/projected/6f0c30e2-825b-40cc-8c89-f454853ded08-kube-api-access-q9zjx\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812087 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812106 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a734d-ecb7-43f1-a7be-e05668e0cc32-config\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812149 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812164 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-profile-collector-cert\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812179 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-policies\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812198 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812213 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-srv-cert\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812227 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ab44990-cfce-4e00-8566-b8902400d263-proxy-tls\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812244 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a373c4-7ca5-4e3e-91a7-4dae1241a7fb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4npfs\" (UID: \"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812265 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939a734d-ecb7-43f1-a7be-e05668e0cc32-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812289 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/316090ee-bdeb-4d02-aee1-6734a421c126-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-khjz7\" (UID: \"316090ee-bdeb-4d02-aee1-6734a421c126\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812307 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812325 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7cca742-7184-431c-b480-273f5fbe6dba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812351 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbthd\" (UniqueName: \"kubernetes.io/projected/57f8716f-6b5d-4f01-9341-674dba56876a-kube-api-access-kbthd\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812370 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f0c30e2-825b-40cc-8c89-f454853ded08-trusted-ca\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812386 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c6e35c8-2541-40a3-8d9e-de756d5b821a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812403 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4z2\" (UniqueName: \"kubernetes.io/projected/f36bde77-88b0-46fb-b33d-85c7c430ab11-kube-api-access-dx4z2\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812420 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-proxy-tls\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812443 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57f8716f-6b5d-4f01-9341-674dba56876a-srv-cert\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.812762 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.813503 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f0c30e2-825b-40cc-8c89-f454853ded08-trusted-ca\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.813539 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a734d-ecb7-43f1-a7be-e05668e0cc32-config\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.815735 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.815994 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f0c30e2-825b-40cc-8c89-f454853ded08-metrics-tls\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.815804 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/316090ee-bdeb-4d02-aee1-6734a421c126-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-khjz7\" (UID: \"316090ee-bdeb-4d02-aee1-6734a421c126\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.818239 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/939a734d-ecb7-43f1-a7be-e05668e0cc32-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.834698 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.857524 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.874796 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.895268 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.915199 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.934894 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 
02:58:36.942427 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7cca742-7184-431c-b480-273f5fbe6dba-config\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.955453 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.974607 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.986326 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7cca742-7184-431c-b480-273f5fbe6dba-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:36 crc kubenswrapper[4880]: I1201 02:58:36.994579 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.014977 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.035772 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.055147 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 02:58:37 crc 
kubenswrapper[4880]: I1201 02:58:37.075959 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.101905 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.115667 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.136672 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.146327 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.155548 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.165750 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.182487 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 02:58:37 crc 
kubenswrapper[4880]: I1201 02:58:37.195572 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.198049 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.206026 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.216115 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.222522 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.236001 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.272894 4880 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.276989 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.285282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.286486 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.315908 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.317761 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.318164 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 03:00:39.318127133 +0000 UTC m=+268.829381545 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.336390 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.343642 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-policies\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.355233 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.364533 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.375936 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.383758 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.402150 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.405146 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.415325 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.420473 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.420535 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.420695 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.420834 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.422325 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.424145 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:37 
crc kubenswrapper[4880]: I1201 02:58:37.424710 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.426484 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.437053 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.445653 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/57f8716f-6b5d-4f01-9341-674dba56876a-srv-cert\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.455624 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.476369 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.486025 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/57f8716f-6b5d-4f01-9341-674dba56876a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.487749 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-profile-collector-cert\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.496241 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.506004 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.507761 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.516139 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.522581 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.532435 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.538362 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.547647 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a373c4-7ca5-4e3e-91a7-4dae1241a7fb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4npfs\" (UID: \"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.556097 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.574369 4880 request.go:700] Waited for 1.003817336s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.576672 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.596733 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 02:58:37 crc 
kubenswrapper[4880]: I1201 02:58:37.618371 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.635629 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.647098 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a71431a-c15f-457e-9058-577e362c8f8a-webhook-cert\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.656161 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a71431a-c15f-457e-9058-577e362c8f8a-apiservice-cert\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.657081 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.661990 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5ab44990-cfce-4e00-8566-b8902400d263-images\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.676787 4880 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.695341 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.715753 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.728007 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ab44990-cfce-4e00-8566-b8902400d263-proxy-tls\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.738028 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.754919 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.775976 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.800617 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.813437 4880 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.813466 4880 secret.go:188] Couldn't get secret 
openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.813515 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-srv-cert podName:6a21c7c7-1f54-4cfe-af87-11e397fead60 nodeName:}" failed. No retries permitted until 2025-12-01 02:58:38.313490294 +0000 UTC m=+147.824744666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-srv-cert") pod "catalog-operator-68c6474976-cqg4c" (UID: "6a21c7c7-1f54-4cfe-af87-11e397fead60") : failed to sync secret cache: timed out waiting for the condition Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.813547 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c6e35c8-2541-40a3-8d9e-de756d5b821a-webhook-certs podName:2c6e35c8-2541-40a3-8d9e-de756d5b821a nodeName:}" failed. No retries permitted until 2025-12-01 02:58:38.313524974 +0000 UTC m=+147.824779346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2c6e35c8-2541-40a3-8d9e-de756d5b821a-webhook-certs") pod "multus-admission-controller-857f4d67dd-q4j7p" (UID: "2c6e35c8-2541-40a3-8d9e-de756d5b821a") : failed to sync secret cache: timed out waiting for the condition Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.813572 4880 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 01 02:58:37 crc kubenswrapper[4880]: E1201 02:58:37.813599 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-proxy-tls podName:8550af07-b9df-4fcf-bdd9-7c282f1f4e88 nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:38.313590456 +0000 UTC m=+147.824844828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-proxy-tls") pod "machine-config-controller-84d6567774-gmlbx" (UID: "8550af07-b9df-4fcf-bdd9-7c282f1f4e88") : failed to sync secret cache: timed out waiting for the condition Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.815835 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.834829 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.855688 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.875849 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.900639 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.915724 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.938417 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.955813 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 02:58:37 crc kubenswrapper[4880]: I1201 02:58:37.975795 4880 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.004077 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 02:58:38 crc kubenswrapper[4880]: W1201 02:58:38.008826 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-94e46944a42a0099837180bc9190e4438ba31a9f5295918f869392a16372ec8a WatchSource:0}: Error finding container 94e46944a42a0099837180bc9190e4438ba31a9f5295918f869392a16372ec8a: Status 404 returned error can't find the container with id 94e46944a42a0099837180bc9190e4438ba31a9f5295918f869392a16372ec8a Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.016990 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.035718 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.055828 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.075224 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.096285 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.116491 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.135652 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.156522 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.176135 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.221046 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvmz\" (UniqueName: \"kubernetes.io/projected/7ed088c8-ac9c-4a17-940c-4ebcb22be231-kube-api-access-srvmz\") pod \"openshift-apiserver-operator-796bbdcf4f-tt724\" (UID: \"7ed088c8-ac9c-4a17-940c-4ebcb22be231\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.241260 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8m6j\" (UniqueName: \"kubernetes.io/projected/183bf44d-c621-4f91-8ddc-10093cfc2596-kube-api-access-q8m6j\") pod \"controller-manager-879f6c89f-mxnkp\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.242111 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.273939 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vsdf\" (UniqueName: \"kubernetes.io/projected/865e44df-b483-40e5-9a4f-d78fce50d532-kube-api-access-8vsdf\") pod \"downloads-7954f5f757-qb4wv\" (UID: \"865e44df-b483-40e5-9a4f-d78fce50d532\") " pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.282343 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzb7d\" (UniqueName: \"kubernetes.io/projected/3621b23f-4e41-4a02-b456-7206682db44f-kube-api-access-xzb7d\") pod \"etcd-operator-b45778765-2br2c\" (UID: \"3621b23f-4e41-4a02-b456-7206682db44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.295778 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.311756 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.315566 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.337254 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c6e35c8-2541-40a3-8d9e-de756d5b821a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.337329 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-proxy-tls\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.337812 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-srv-cert\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.341974 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6a21c7c7-1f54-4cfe-af87-11e397fead60-srv-cert\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.342640 4880 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.346559 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-proxy-tls\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.348443 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c6e35c8-2541-40a3-8d9e-de756d5b821a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.366404 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.375811 4880 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.396100 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.415996 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.429723 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724"] Dec 01 02:58:38 crc kubenswrapper[4880]: W1201 02:58:38.434991 4880 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ed088c8_ac9c_4a17_940c_4ebcb22be231.slice/crio-df4a1c0c39c8e06d6965211d9a4c78cee1fb7f0a305e97f323d4887919718a3b WatchSource:0}: Error finding container df4a1c0c39c8e06d6965211d9a4c78cee1fb7f0a305e97f323d4887919718a3b: Status 404 returned error can't find the container with id df4a1c0c39c8e06d6965211d9a4c78cee1fb7f0a305e97f323d4887919718a3b Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.436754 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.455730 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.475685 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.494837 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.498531 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mxnkp"] Dec 01 02:58:38 crc kubenswrapper[4880]: W1201 02:58:38.504881 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod183bf44d_c621_4f91_8ddc_10093cfc2596.slice/crio-ecc4a663b506eb4383c591ccb7e5ca8e973bc94a62ef77074ba39eb74adec941 WatchSource:0}: Error finding container ecc4a663b506eb4383c591ccb7e5ca8e973bc94a62ef77074ba39eb74adec941: Status 404 returned error can't find the container with id ecc4a663b506eb4383c591ccb7e5ca8e973bc94a62ef77074ba39eb74adec941 Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.513291 4880 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.514621 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.523588 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.538283 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.576598 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbf8g\" (UniqueName: \"kubernetes.io/projected/95ce7326-f487-4c1e-80e9-cc39e4af2708-kube-api-access-jbf8g\") pod \"apiserver-7bbb656c7d-bcrdd\" (UID: \"95ce7326-f487-4c1e-80e9-cc39e4af2708\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.590010 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6df9r\" (UniqueName: \"kubernetes.io/projected/14fe6389-fa84-4ec6-8891-0a379e0d4f29-kube-api-access-6df9r\") pod \"route-controller-manager-6576b87f9c-8cw6k\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.595485 4880 request.go:700] Waited for 1.896892568s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.640987 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4rz5w\" (UniqueName: \"kubernetes.io/projected/8bea9ed3-3bfc-4f97-bf60-d544e791e5f5-kube-api-access-4rz5w\") pod \"machine-approver-56656f9798-bbwxx\" (UID: \"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.665533 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsvnp\" (UniqueName: \"kubernetes.io/projected/76e86e67-dde6-4b7c-883b-ce22eb444299-kube-api-access-jsvnp\") pod \"openshift-controller-manager-operator-756b6f6bc6-wffb2\" (UID: \"76e86e67-dde6-4b7c-883b-ce22eb444299\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.669883 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d0107dee482597c5bc35f598ad7c20e385dbf7139e3a6d89f86208f014fe3981"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.669914 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8939c9fb6d003aff9a8fc0d4ba558fddf7b894fe230eb848156e5f9c5cfe2c88"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.672599 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78lns\" (UniqueName: \"kubernetes.io/projected/d48b50d9-30fa-455f-b3b3-5c781089871f-kube-api-access-78lns\") pod \"openshift-config-operator-7777fb866f-n6tkj\" (UID: \"d48b50d9-30fa-455f-b3b3-5c781089871f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.674127 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" event={"ID":"7ed088c8-ac9c-4a17-940c-4ebcb22be231","Type":"ContainerStarted","Data":"c26a221b15e6c18ab138479b5c5db734698651ff5c3052ba5f0248060151a33a"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.674229 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" event={"ID":"7ed088c8-ac9c-4a17-940c-4ebcb22be231","Type":"ContainerStarted","Data":"df4a1c0c39c8e06d6965211d9a4c78cee1fb7f0a305e97f323d4887919718a3b"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.676509 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"60266ec377f3b1dfa268ee6f6c7453d86aaf6e485065933bb5d99b131d26f9e4"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.676536 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0a5f3264ce5d714244c4f3352507b77fef03b7d9b6043a934db9c4038046b98f"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.682786 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqtks\" (UniqueName: \"kubernetes.io/projected/747403d3-576b-4621-8cb3-b9122348ec98-kube-api-access-pqtks\") pod \"console-f9d7485db-qcvrn\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.683593 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e1a579d3eafd5be254be6ca1540bb1d0b4280de3652ccd5b7be151d3985e86bc"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.683617 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"94e46944a42a0099837180bc9190e4438ba31a9f5295918f869392a16372ec8a"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.683960 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.686673 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" event={"ID":"183bf44d-c621-4f91-8ddc-10093cfc2596","Type":"ContainerStarted","Data":"de9e0d5b52caf3a2fbf0feface417552680ba41db0b06aa98b13d3f0a6011811"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.686715 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" event={"ID":"183bf44d-c621-4f91-8ddc-10093cfc2596","Type":"ContainerStarted","Data":"ecc4a663b506eb4383c591ccb7e5ca8e973bc94a62ef77074ba39eb74adec941"} Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.687301 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.688457 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmx8t\" (UniqueName: \"kubernetes.io/projected/61db3a36-06c6-43f4-b78c-90dbb61eb095-kube-api-access-zmx8t\") pod \"cluster-samples-operator-665b6dd947-x44c7\" (UID: \"61db3a36-06c6-43f4-b78c-90dbb61eb095\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.690134 4880 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mxnkp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.690180 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" podUID="183bf44d-c621-4f91-8ddc-10093cfc2596" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.697881 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.711418 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv8mn\" (UniqueName: \"kubernetes.io/projected/bdeead84-dfe5-482c-af20-6bad5984c7bf-kube-api-access-bv8mn\") pod \"dns-operator-744455d44c-xx59g\" (UID: \"bdeead84-dfe5-482c-af20-6bad5984c7bf\") " pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.727405 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.732160 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5vw\" (UniqueName: \"kubernetes.io/projected/a333682a-02d1-4a1e-900e-7a78e7b67317-kube-api-access-cj5vw\") pod \"authentication-operator-69f744f599-vmhg8\" (UID: \"a333682a-02d1-4a1e-900e-7a78e7b67317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.733660 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2br2c"] Dec 01 02:58:38 crc kubenswrapper[4880]: W1201 02:58:38.739345 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3621b23f_4e41_4a02_b456_7206682db44f.slice/crio-4d1c41cf04a0de935da50645d277b094a825b390118e92f0e08006ca01b787c3 WatchSource:0}: Error finding container 4d1c41cf04a0de935da50645d277b094a825b390118e92f0e08006ca01b787c3: Status 404 returned error can't find the container with id 4d1c41cf04a0de935da50645d277b094a825b390118e92f0e08006ca01b787c3 Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.751213 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.755133 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqcnw\" (UniqueName: \"kubernetes.io/projected/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-kube-api-access-jqcnw\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.771389 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.771520 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4630aaa6-b6fb-4636-b2b6-4ceb52375b04-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fj5rb\" (UID: \"4630aaa6-b6fb-4636-b2b6-4ceb52375b04\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.773363 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qb4wv"] Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.784270 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.791133 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.800141 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.805461 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tpvf\" (UniqueName: \"kubernetes.io/projected/59b10596-74c6-4b9f-aaa9-69d60015a048-kube-api-access-7tpvf\") pod \"machine-api-operator-5694c8668f-d8pwf\" (UID: \"59b10596-74c6-4b9f-aaa9-69d60015a048\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.831843 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzx5t\" (UniqueName: \"kubernetes.io/projected/79af8363-2911-45e0-9b07-3421b2626de0-kube-api-access-kzx5t\") pod \"apiserver-76f77b778f-tzxfr\" (UID: \"79af8363-2911-45e0-9b07-3421b2626de0\") " pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.841467 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.842586 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq65s\" (UniqueName: \"kubernetes.io/projected/b3164eb7-c85e-4eaa-8318-b887832da2e5-kube-api-access-cq65s\") pod \"console-operator-58897d9998-sdrzn\" (UID: \"b3164eb7-c85e-4eaa-8318-b887832da2e5\") " pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.847893 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkkf5\" (UniqueName: \"kubernetes.io/projected/5ab44990-cfce-4e00-8566-b8902400d263-kube-api-access-vkkf5\") pod \"machine-config-operator-74547568cd-twglb\" (UID: \"5ab44990-cfce-4e00-8566-b8902400d263\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.849062 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.868585 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f0c30e2-825b-40cc-8c89-f454853ded08-bound-sa-token\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.895382 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6ns\" (UniqueName: \"kubernetes.io/projected/86a373c4-7ca5-4e3e-91a7-4dae1241a7fb-kube-api-access-fv6ns\") pod \"package-server-manager-789f6589d5-4npfs\" (UID: \"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.919641 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l8bx\" (UniqueName: \"kubernetes.io/projected/8a71431a-c15f-457e-9058-577e362c8f8a-kube-api-access-4l8bx\") pod \"packageserver-d55dfcdfc-v8mf4\" (UID: \"8a71431a-c15f-457e-9058-577e362c8f8a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.929822 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7p8p\" (UniqueName: \"kubernetes.io/projected/316090ee-bdeb-4d02-aee1-6734a421c126-kube-api-access-f7p8p\") pod \"control-plane-machine-set-operator-78cbb6b69f-khjz7\" (UID: \"316090ee-bdeb-4d02-aee1-6734a421c126\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.936668 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.943000 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.954158 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.962953 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k"] Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.966672 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939a734d-ecb7-43f1-a7be-e05668e0cc32-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gns66\" (UID: \"939a734d-ecb7-43f1-a7be-e05668e0cc32\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.972380 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.981259 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.986326 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7cca742-7184-431c-b480-273f5fbe6dba-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ctn57\" (UID: \"d7cca742-7184-431c-b480-273f5fbe6dba\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:38 crc kubenswrapper[4880]: I1201 02:58:38.993682 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.000496 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf25k\" (UniqueName: \"kubernetes.io/projected/6a21c7c7-1f54-4cfe-af87-11e397fead60-kube-api-access-hf25k\") pod \"catalog-operator-68c6474976-cqg4c\" (UID: \"6a21c7c7-1f54-4cfe-af87-11e397fead60\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.013606 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zjx\" (UniqueName: \"kubernetes.io/projected/6f0c30e2-825b-40cc-8c89-f454853ded08-kube-api-access-q9zjx\") pod \"ingress-operator-5b745b69d9-htspn\" (UID: \"6f0c30e2-825b-40cc-8c89-f454853ded08\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.053421 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbthd\" (UniqueName: \"kubernetes.io/projected/57f8716f-6b5d-4f01-9341-674dba56876a-kube-api-access-kbthd\") pod \"olm-operator-6b444d44fb-m6p9n\" (UID: \"57f8716f-6b5d-4f01-9341-674dba56876a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.060281 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4z2\" (UniqueName: \"kubernetes.io/projected/f36bde77-88b0-46fb-b33d-85c7c430ab11-kube-api-access-dx4z2\") pod \"oauth-openshift-558db77b4-vgdhn\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.073501 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwtq\" (UniqueName: \"kubernetes.io/projected/8550af07-b9df-4fcf-bdd9-7c282f1f4e88-kube-api-access-jjwtq\") pod \"machine-config-controller-84d6567774-gmlbx\" (UID: \"8550af07-b9df-4fcf-bdd9-7c282f1f4e88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.079233 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.101321 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2fsq\" (UniqueName: \"kubernetes.io/projected/2c6e35c8-2541-40a3-8d9e-de756d5b821a-kube-api-access-n2fsq\") pod \"multus-admission-controller-857f4d67dd-q4j7p\" (UID: \"2c6e35c8-2541-40a3-8d9e-de756d5b821a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150003 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/12c4c6bc-892f-4695-ac59-4a930d0a8925-signing-cabundle\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150306 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-default-certificate\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150339 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-stats-auth\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150381 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzk8\" (UniqueName: 
\"kubernetes.io/projected/ea1b23cc-880d-4f32-a077-daac8279716a-kube-api-access-zxzk8\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150407 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-bound-sa-token\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150442 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc63ffe-2f3f-4805-b13b-8da24a393826-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150458 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150480 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc63ffe-2f3f-4805-b13b-8da24a393826-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: 
\"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150499 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4766bca-6e7e-4ce9-acdb-ca266883540c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150535 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea1b23cc-880d-4f32-a077-daac8279716a-config\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150554 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-trusted-ca\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150572 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150616 
4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkjf\" (UniqueName: \"kubernetes.io/projected/d9fd9260-cfde-4ec1-8b3c-c757712369d6-kube-api-access-pfkjf\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150644 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4766bca-6e7e-4ce9-acdb-ca266883540c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150677 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-registry-tls\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150695 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/fd0d64d0-7952-425c-95d5-5180ed5f588c-kube-api-access-2lhsb\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150716 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/74dde675-4516-4165-badb-d7233a017fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150733 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzt4x\" (UniqueName: \"kubernetes.io/projected/9cc63ffe-2f3f-4805-b13b-8da24a393826-kube-api-access-bzt4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150765 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9fd9260-cfde-4ec1-8b3c-c757712369d6-secret-volume\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150810 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/12c4c6bc-892f-4695-ac59-4a930d0a8925-signing-key\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150843 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7slm\" (UniqueName: \"kubernetes.io/projected/12c4c6bc-892f-4695-ac59-4a930d0a8925-kube-api-access-s7slm\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150860 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5pj\" (UniqueName: \"kubernetes.io/projected/b9f53556-00ef-4ba6-a73b-880533578d2e-kube-api-access-hr5pj\") pod \"migrator-59844c95c7-v8q2n\" (UID: \"b9f53556-00ef-4ba6-a73b-880533578d2e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150919 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr29x\" (UniqueName: \"kubernetes.io/projected/5ece886a-bdc2-4c08-b6a8-4fd522409dee-kube-api-access-jr29x\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150936 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74dde675-4516-4165-badb-d7233a017fe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.150959 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4766bca-6e7e-4ce9-acdb-ca266883540c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151035 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ece886a-bdc2-4c08-b6a8-4fd522409dee-service-ca-bundle\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151090 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151113 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd9260-cfde-4ec1-8b3c-c757712369d6-config-volume\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151151 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zx2\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-kube-api-access-94zx2\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151191 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-registry-certificates\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151305 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea1b23cc-880d-4f32-a077-daac8279716a-serving-cert\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.151348 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-metrics-certs\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.151939 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:39.651925183 +0000 UTC m=+149.163179545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: W1201 02:58:39.165307 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14fe6389_fa84_4ec6_8891_0a379e0d4f29.slice/crio-23c26be5f123e5387e0cdfc0ef757de41c08d7543779bdeadcb8f7acd7c16b0e WatchSource:0}: Error finding container 23c26be5f123e5387e0cdfc0ef757de41c08d7543779bdeadcb8f7acd7c16b0e: Status 404 returned error can't find the container with id 23c26be5f123e5387e0cdfc0ef757de41c08d7543779bdeadcb8f7acd7c16b0e Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.165591 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.170472 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.180161 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.199111 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.201072 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xx59g"] Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.210129 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.218668 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.253652 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.253854 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-stats-auth\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.253905 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzk8\" (UniqueName: \"kubernetes.io/projected/ea1b23cc-880d-4f32-a077-daac8279716a-kube-api-access-zxzk8\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.253931 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-bound-sa-token\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.253976 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc63ffe-2f3f-4805-b13b-8da24a393826-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254000 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254014 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc63ffe-2f3f-4805-b13b-8da24a393826-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254058 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246lk\" (UniqueName: 
\"kubernetes.io/projected/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-kube-api-access-246lk\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254085 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4766bca-6e7e-4ce9-acdb-ca266883540c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.254258 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:39.754242993 +0000 UTC m=+149.265497365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254477 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea1b23cc-880d-4f32-a077-daac8279716a-config\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254626 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-node-bootstrap-token\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254655 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-certs\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254694 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7p5\" (UniqueName: \"kubernetes.io/projected/68495471-e39a-458d-ae4e-0021a7644254-kube-api-access-zz7p5\") pod 
\"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254742 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-trusted-ca\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254902 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.254939 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkjf\" (UniqueName: \"kubernetes.io/projected/d9fd9260-cfde-4ec1-8b3c-c757712369d6-kube-api-access-pfkjf\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255002 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-socket-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255024 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d4766bca-6e7e-4ce9-acdb-ca266883540c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255083 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-registry-tls\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255128 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/fd0d64d0-7952-425c-95d5-5180ed5f588c-kube-api-access-2lhsb\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255152 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxqz\" (UniqueName: \"kubernetes.io/projected/36e3f59c-69a4-423e-8820-58e3309d5aa9-kube-api-access-fdxqz\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255254 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74dde675-4516-4165-badb-d7233a017fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255294 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36e3f59c-69a4-423e-8820-58e3309d5aa9-metrics-tls\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255323 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9fd9260-cfde-4ec1-8b3c-c757712369d6-secret-volume\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255339 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzt4x\" (UniqueName: \"kubernetes.io/projected/9cc63ffe-2f3f-4805-b13b-8da24a393826-kube-api-access-bzt4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255442 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-csi-data-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255571 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-registration-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " 
pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255648 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/12c4c6bc-892f-4695-ac59-4a930d0a8925-signing-key\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255728 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7slm\" (UniqueName: \"kubernetes.io/projected/12c4c6bc-892f-4695-ac59-4a930d0a8925-kube-api-access-s7slm\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255792 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5pj\" (UniqueName: \"kubernetes.io/projected/b9f53556-00ef-4ba6-a73b-880533578d2e-kube-api-access-hr5pj\") pod \"migrator-59844c95c7-v8q2n\" (UID: \"b9f53556-00ef-4ba6-a73b-880533578d2e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255813 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49e299ff-c65c-4b5c-b429-8b816d30bddb-cert\") pod \"ingress-canary-7wlgm\" (UID: \"49e299ff-c65c-4b5c-b429-8b816d30bddb\") " pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255859 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr29x\" (UniqueName: \"kubernetes.io/projected/5ece886a-bdc2-4c08-b6a8-4fd522409dee-kube-api-access-jr29x\") pod \"router-default-5444994796-62v7v\" (UID: 
\"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255925 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74dde675-4516-4165-badb-d7233a017fe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.255990 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4766bca-6e7e-4ce9-acdb-ca266883540c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256011 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e3f59c-69a4-423e-8820-58e3309d5aa9-config-volume\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256094 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ece886a-bdc2-4c08-b6a8-4fd522409dee-service-ca-bundle\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256216 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256251 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd9260-cfde-4ec1-8b3c-c757712369d6-config-volume\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256327 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94zx2\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-kube-api-access-94zx2\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256379 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-mountpoint-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256421 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-registry-certificates\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256488 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea1b23cc-880d-4f32-a077-daac8279716a-serving-cert\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256528 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-metrics-certs\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256606 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/12c4c6bc-892f-4695-ac59-4a930d0a8925-signing-cabundle\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256630 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pskx4\" (UniqueName: \"kubernetes.io/projected/49e299ff-c65c-4b5c-b429-8b816d30bddb-kube-api-access-pskx4\") pod \"ingress-canary-7wlgm\" (UID: \"49e299ff-c65c-4b5c-b429-8b816d30bddb\") " pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256659 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-default-certificate\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc 
kubenswrapper[4880]: I1201 02:58:39.256777 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-plugins-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.262124 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74dde675-4516-4165-badb-d7233a017fe1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.263282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-stats-auth\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.263389 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-registry-certificates\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.268625 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:39.768608927 +0000 UTC m=+149.279863299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.287313 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.288519 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4766bca-6e7e-4ce9-acdb-ca266883540c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.292034 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.293621 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ece886a-bdc2-4c08-b6a8-4fd522409dee-service-ca-bundle\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.293799 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-trusted-ca\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.294128 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc63ffe-2f3f-4805-b13b-8da24a393826-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.294255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc63ffe-2f3f-4805-b13b-8da24a393826-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.294576 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d9fd9260-cfde-4ec1-8b3c-c757712369d6-config-volume\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.294666 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.302909 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/12c4c6bc-892f-4695-ac59-4a930d0a8925-signing-cabundle\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.256610 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea1b23cc-880d-4f32-a077-daac8279716a-config\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.303452 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4766bca-6e7e-4ce9-acdb-ca266883540c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.316197 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74dde675-4516-4165-badb-d7233a017fe1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.317675 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd"] Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.319255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea1b23cc-880d-4f32-a077-daac8279716a-serving-cert\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.322940 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-default-certificate\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.324399 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.334392 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzk8\" (UniqueName: \"kubernetes.io/projected/ea1b23cc-880d-4f32-a077-daac8279716a-kube-api-access-zxzk8\") pod \"service-ca-operator-777779d784-hqpsf\" (UID: \"ea1b23cc-880d-4f32-a077-daac8279716a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.341162 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.346031 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-registry-tls\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.346935 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ece886a-bdc2-4c08-b6a8-4fd522409dee-metrics-certs\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.353554 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5pj\" (UniqueName: \"kubernetes.io/projected/b9f53556-00ef-4ba6-a73b-880533578d2e-kube-api-access-hr5pj\") pod \"migrator-59844c95c7-v8q2n\" (UID: \"b9f53556-00ef-4ba6-a73b-880533578d2e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 
02:58:39.355057 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr29x\" (UniqueName: \"kubernetes.io/projected/5ece886a-bdc2-4c08-b6a8-4fd522409dee-kube-api-access-jr29x\") pod \"router-default-5444994796-62v7v\" (UID: \"5ece886a-bdc2-4c08-b6a8-4fd522409dee\") " pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.365055 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9fd9260-cfde-4ec1-8b3c-c757712369d6-secret-volume\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.367179 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.367391 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:39.867362373 +0000 UTC m=+149.378616745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.367564 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-registration-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.367609 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49e299ff-c65c-4b5c-b429-8b816d30bddb-cert\") pod \"ingress-canary-7wlgm\" (UID: \"49e299ff-c65c-4b5c-b429-8b816d30bddb\") " pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.367914 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e3f59c-69a4-423e-8820-58e3309d5aa9-config-volume\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.368002 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.368183 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e3f59c-69a4-423e-8820-58e3309d5aa9-config-volume\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.368991 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-bound-sa-token\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.370060 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-registration-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.372022 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:39.872007722 +0000 UTC m=+149.383262094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372376 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-mountpoint-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372410 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pskx4\" (UniqueName: \"kubernetes.io/projected/49e299ff-c65c-4b5c-b429-8b816d30bddb-kube-api-access-pskx4\") pod \"ingress-canary-7wlgm\" (UID: \"49e299ff-c65c-4b5c-b429-8b816d30bddb\") " pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372448 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-plugins-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372480 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-246lk\" (UniqueName: \"kubernetes.io/projected/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-kube-api-access-246lk\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " 
pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372517 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-node-bootstrap-token\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372533 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-certs\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372564 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7p5\" (UniqueName: \"kubernetes.io/projected/68495471-e39a-458d-ae4e-0021a7644254-kube-api-access-zz7p5\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372607 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-socket-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372675 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxqz\" (UniqueName: \"kubernetes.io/projected/36e3f59c-69a4-423e-8820-58e3309d5aa9-kube-api-access-fdxqz\") pod \"dns-default-xj7f7\" (UID: 
\"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372700 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/36e3f59c-69a4-423e-8820-58e3309d5aa9-metrics-tls\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372732 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-csi-data-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372842 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-csi-data-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.372911 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-mountpoint-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.373042 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-plugins-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc 
kubenswrapper[4880]: I1201 02:58:39.375076 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68495471-e39a-458d-ae4e-0021a7644254-socket-dir\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.377925 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj"] Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.382721 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/12c4c6bc-892f-4695-ac59-4a930d0a8925-signing-key\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.460572 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4766bca-6e7e-4ce9-acdb-ca266883540c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8xnpm\" (UID: \"d4766bca-6e7e-4ce9-acdb-ca266883540c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.460860 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-node-bootstrap-token\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.461165 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/36e3f59c-69a4-423e-8820-58e3309d5aa9-metrics-tls\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.461558 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-certs\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.462075 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/49e299ff-c65c-4b5c-b429-8b816d30bddb-cert\") pod \"ingress-canary-7wlgm\" (UID: \"49e299ff-c65c-4b5c-b429-8b816d30bddb\") " pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.473468 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.473890 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:39.97386049 +0000 UTC m=+149.485114862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.489196 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.491606 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzt4x\" (UniqueName: \"kubernetes.io/projected/9cc63ffe-2f3f-4805-b13b-8da24a393826-kube-api-access-bzt4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-d5h9k\" (UID: \"9cc63ffe-2f3f-4805-b13b-8da24a393826\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.499582 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7slm\" (UniqueName: \"kubernetes.io/projected/12c4c6bc-892f-4695-ac59-4a930d0a8925-kube-api-access-s7slm\") pod \"service-ca-9c57cc56f-tt6fz\" (UID: \"12c4c6bc-892f-4695-ac59-4a930d0a8925\") " pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.503508 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkjf\" (UniqueName: \"kubernetes.io/projected/d9fd9260-cfde-4ec1-8b3c-c757712369d6-kube-api-access-pfkjf\") pod \"collect-profiles-29409285-9cplb\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc 
kubenswrapper[4880]: I1201 02:58:39.505844 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/fd0d64d0-7952-425c-95d5-5180ed5f588c-kube-api-access-2lhsb\") pod \"marketplace-operator-79b997595-lf8k7\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") " pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.533227 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pskx4\" (UniqueName: \"kubernetes.io/projected/49e299ff-c65c-4b5c-b429-8b816d30bddb-kube-api-access-pskx4\") pod \"ingress-canary-7wlgm\" (UID: \"49e299ff-c65c-4b5c-b429-8b816d30bddb\") " pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.533778 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94zx2\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-kube-api-access-94zx2\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.534067 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.535485 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2"] Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.544168 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-246lk\" (UniqueName: \"kubernetes.io/projected/7764dd3c-3f1c-45a3-b480-8e80002d4f6d-kube-api-access-246lk\") pod \"machine-config-server-9dvqb\" (UID: \"7764dd3c-3f1c-45a3-b480-8e80002d4f6d\") " pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.563059 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.567059 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxqz\" (UniqueName: \"kubernetes.io/projected/36e3f59c-69a4-423e-8820-58e3309d5aa9-kube-api-access-fdxqz\") pod \"dns-default-xj7f7\" (UID: \"36e3f59c-69a4-423e-8820-58e3309d5aa9\") " pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.574520 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.576042 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.576438 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.076426216 +0000 UTC m=+149.587680588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.578243 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.586079 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7p5\" (UniqueName: \"kubernetes.io/projected/68495471-e39a-458d-ae4e-0021a7644254-kube-api-access-zz7p5\") pod \"csi-hostpathplugin-j5tcx\" (UID: \"68495471-e39a-458d-ae4e-0021a7644254\") " pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.596064 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.634856 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vmhg8"] Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.636114 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.657226 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.678135 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.678787 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.678939 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.1789249 +0000 UTC m=+149.690179272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.679166 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.683209 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.183191799 +0000 UTC m=+149.694446171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.693532 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-9dvqb" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.700479 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7wlgm" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.771124 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" event={"ID":"14fe6389-fa84-4ec6-8891-0a379e0d4f29","Type":"ContainerStarted","Data":"23c26be5f123e5387e0cdfc0ef757de41c08d7543779bdeadcb8f7acd7c16b0e"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.776709 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" event={"ID":"3621b23f-4e41-4a02-b456-7206682db44f","Type":"ContainerStarted","Data":"4d1c41cf04a0de935da50645d277b094a825b390118e92f0e08006ca01b787c3"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.780628 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" event={"ID":"95ce7326-f487-4c1e-80e9-cc39e4af2708","Type":"ContainerStarted","Data":"23af62f1213b9a835daf5cb41d9115dfc9faba97ece4e63108b97a7502c714d2"} 
Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.789490 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.789796 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.289781348 +0000 UTC m=+149.801035720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.791194 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" event={"ID":"d48b50d9-30fa-455f-b3b3-5c781089871f","Type":"ContainerStarted","Data":"b66a9cb771198fece29284145448060d0d9b6c13f469240cd894c43302daa2d1"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.803003 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.803482 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" event={"ID":"bdeead84-dfe5-482c-af20-6bad5984c7bf","Type":"ContainerStarted","Data":"7e6cdc4056f01b31b5cc12836be3bfb5137953831c932af7f531f9eed687f72e"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.820331 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qb4wv" event={"ID":"865e44df-b483-40e5-9a4f-d78fce50d532","Type":"ContainerStarted","Data":"d7471530ef3b1fd143698691e202efa2b465b3997e924ceb1bb5bd6226153fd2"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.820380 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qb4wv" event={"ID":"865e44df-b483-40e5-9a4f-d78fce50d532","Type":"ContainerStarted","Data":"284e6975911621738c451a33fd78f60eabdc9d6f2445afed6b9f4c4f6e9a126e"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.820618 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.823805 4880 patch_prober.go:28] interesting pod/downloads-7954f5f757-qb4wv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.824010 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qb4wv" podUID="865e44df-b483-40e5-9a4f-d78fce50d532" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 
02:58:39.830053 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qcvrn"] Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.837696 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" event={"ID":"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5","Type":"ContainerStarted","Data":"6780c38dc3300dd905788363d4f87bf8c8d7aac2e41f42574f9f9929dfa5dea3"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.837728 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" event={"ID":"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5","Type":"ContainerStarted","Data":"d4548794c4cd53f4f528090ce748fd0ecb2f6bd60eeb9241e93f6213235e7029"} Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.843143 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 02:58:39 crc kubenswrapper[4880]: I1201 02:58:39.890806 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:39 crc kubenswrapper[4880]: E1201 02:58:39.891394 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.391378801 +0000 UTC m=+149.902633173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:39.997780 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:39.997986 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.49795474 +0000 UTC m=+150.009209112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:39.998051 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:39.998450 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.498443251 +0000 UTC m=+150.009697623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.104430 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.104725 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.604709132 +0000 UTC m=+150.115963504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.205508 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.205803 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.705792013 +0000 UTC m=+150.217046385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.245101 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb"] Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.308152 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.308838 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.808823349 +0000 UTC m=+150.320077721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: W1201 02:58:40.346047 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7764dd3c_3f1c_45a3_b480_8e80002d4f6d.slice/crio-be605222d92db8a4d5611d51e46526bffc6a6a14c015e59a94c393ddae680ae9 WatchSource:0}: Error finding container be605222d92db8a4d5611d51e46526bffc6a6a14c015e59a94c393ddae680ae9: Status 404 returned error can't find the container with id be605222d92db8a4d5611d51e46526bffc6a6a14c015e59a94c393ddae680ae9 Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.349863 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tt724" podStartSLOduration=130.349842563 podStartE2EDuration="2m10.349842563s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:40.347665492 +0000 UTC m=+149.858919864" watchObservedRunningTime="2025-12-01 02:58:40.349842563 +0000 UTC m=+149.861096935" Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.418671 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.419036 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:40.919023882 +0000 UTC m=+150.430278254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.488233 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs"] Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.519920 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.520640 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.020620635 +0000 UTC m=+150.531875007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.524077 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.526173 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.026143403 +0000 UTC m=+150.537397775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.568863 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twglb"] Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.630379 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.630537 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.130509561 +0000 UTC m=+150.641763933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.630636 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.634943 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.134926453 +0000 UTC m=+150.646180825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.660570 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4"] Dec 01 02:58:40 crc kubenswrapper[4880]: W1201 02:58:40.686969 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86a373c4_7ca5_4e3e_91a7_4dae1241a7fb.slice/crio-1b90407bcd0ce6e5d94a329c9291f92f857428453e3f83199ca9248c5df14d14 WatchSource:0}: Error finding container 1b90407bcd0ce6e5d94a329c9291f92f857428453e3f83199ca9248c5df14d14: Status 404 returned error can't find the container with id 1b90407bcd0ce6e5d94a329c9291f92f857428453e3f83199ca9248c5df14d14 Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.697028 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7"] Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.712512 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" podStartSLOduration=130.712496557 podStartE2EDuration="2m10.712496557s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:40.711481734 +0000 UTC m=+150.222736106" watchObservedRunningTime="2025-12-01 02:58:40.712496557 +0000 UTC m=+150.223750929" Dec 01 
02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.735331 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.735490 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.235467432 +0000 UTC m=+150.746721804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.735600 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.735852 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:41.235844581 +0000 UTC m=+150.747098953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.838157 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 02:58:40.838722 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.338708823 +0000 UTC m=+150.849963195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.905049 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tzxfr"] Dec 01 02:58:40 crc kubenswrapper[4880]: W1201 02:58:40.913161 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a71431a_c15f_457e_9058_577e362c8f8a.slice/crio-ce7eaad6dbb2970e067ceac6f55b0bad19c0f39ba06a9b75dd73b8dfe71acfa2 WatchSource:0}: Error finding container ce7eaad6dbb2970e067ceac6f55b0bad19c0f39ba06a9b75dd73b8dfe71acfa2: Status 404 returned error can't find the container with id ce7eaad6dbb2970e067ceac6f55b0bad19c0f39ba06a9b75dd73b8dfe71acfa2 Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.938201 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-62v7v" event={"ID":"5ece886a-bdc2-4c08-b6a8-4fd522409dee","Type":"ContainerStarted","Data":"deadde4ce55c43a7e1bebed693c468cdf1f17f530043fad7202e9d48e9ec9932"} Dec 01 02:58:40 crc kubenswrapper[4880]: I1201 02:58:40.939025 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:40 crc kubenswrapper[4880]: E1201 
02:58:40.939321 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.439309313 +0000 UTC m=+150.950563685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.010451 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" event={"ID":"3621b23f-4e41-4a02-b456-7206682db44f","Type":"ContainerStarted","Data":"2ba26edf797d7d9594eda0749dac3132da9e6f5a96e9e98ad23d875e1af3bd26"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.039734 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.040652 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.540631999 +0000 UTC m=+151.051886371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.067583 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qcvrn" event={"ID":"747403d3-576b-4621-8cb3-b9122348ec98","Type":"ContainerStarted","Data":"06b1a4c70d46ee8fda7d107caf63dee891daab163f797a7ab3df15ef027a69cf"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.074886 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" event={"ID":"76e86e67-dde6-4b7c-883b-ce22eb444299","Type":"ContainerStarted","Data":"8ae0cf13ea6535156976a363673f52318a82904c5f5b425680b915ac37a49e8e"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.082451 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qb4wv" podStartSLOduration=132.082438162 podStartE2EDuration="2m12.082438162s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:41.081296055 +0000 UTC m=+150.592550437" watchObservedRunningTime="2025-12-01 02:58:41.082438162 +0000 UTC m=+150.593692534" Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.109292 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" 
event={"ID":"a333682a-02d1-4a1e-900e-7a78e7b67317","Type":"ContainerStarted","Data":"2dccbf40f32857418243288baf153df6c9ad4b13102bbc7b8e6336e0c387b02c"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.112317 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgdhn"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.145214 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.145594 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.64558189 +0000 UTC m=+151.156836262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.148175 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-htspn"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.149946 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sdrzn"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.158425 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d8pwf"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.166662 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" event={"ID":"4630aaa6-b6fb-4636-b2b6-4ceb52375b04","Type":"ContainerStarted","Data":"59856aec7f306bd1e47c18b471e5cad773f768ac6bebdcd48fb264bbccc0177f"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.185101 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9dvqb" event={"ID":"7764dd3c-3f1c-45a3-b480-8e80002d4f6d","Type":"ContainerStarted","Data":"be605222d92db8a4d5611d51e46526bffc6a6a14c015e59a94c393ddae680ae9"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.247634 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.248026 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" event={"ID":"14fe6389-fa84-4ec6-8891-0a379e0d4f29","Type":"ContainerStarted","Data":"33367c9df740fbcc0ccc194f2270ea2f7f8fcfec3e6cb68909d57cece4094078"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.249118 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.249198 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.74918372 +0000 UTC m=+151.260438092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.297470 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.365795 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.366196 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.866171381 +0000 UTC m=+151.377425753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.367647 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" event={"ID":"d48b50d9-30fa-455f-b3b3-5c781089871f","Type":"ContainerStarted","Data":"86bde32ff316111141345463f9367f6da100fa083248e4a5642b42bddcc25e19"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.378904 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" event={"ID":"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb","Type":"ContainerStarted","Data":"1b90407bcd0ce6e5d94a329c9291f92f857428453e3f83199ca9248c5df14d14"} Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.379261 4880 patch_prober.go:28] interesting pod/downloads-7954f5f757-qb4wv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.379285 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qb4wv" podUID="865e44df-b483-40e5-9a4f-d78fce50d532" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.471638 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.472185 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.972169576 +0000 UTC m=+151.483423948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.472242 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.473811 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:41.973804054 +0000 UTC m=+151.485058426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.474974 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-q4j7p"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.486147 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.575677 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.576033 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.076017491 +0000 UTC m=+151.587271863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.644717 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.676580 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.676829 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.176819496 +0000 UTC m=+151.688073868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.693258 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.735489 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" podStartSLOduration=131.735452109 podStartE2EDuration="2m11.735452109s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:41.731765064 +0000 UTC m=+151.243019436" watchObservedRunningTime="2025-12-01 02:58:41.735452109 +0000 UTC m=+151.246706481" Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.782857 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.784425 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:42.284388948 +0000 UTC m=+151.795643320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.786376 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57"] Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.794622 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.889339 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:41 crc kubenswrapper[4880]: E1201 02:58:41.889918 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.389907002 +0000 UTC m=+151.901161374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.903915 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2br2c" podStartSLOduration=131.903899917 podStartE2EDuration="2m11.903899917s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:41.874120005 +0000 UTC m=+151.385374377" watchObservedRunningTime="2025-12-01 02:58:41.903899917 +0000 UTC m=+151.415154289" Dec 01 02:58:41 crc kubenswrapper[4880]: I1201 02:58:41.906019 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.001420 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.001862 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:42.501772463 +0000 UTC m=+152.013026835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.002635 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.003148 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.503130635 +0000 UTC m=+152.014385007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.034852 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.059185 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.103899 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.104059 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.604044052 +0000 UTC m=+152.115298424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.167202 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.167556 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.667544369 +0000 UTC m=+152.178798741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.169303 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" podStartSLOduration=133.169287829 podStartE2EDuration="2m13.169287829s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.168476501 +0000 UTC m=+151.679730883" watchObservedRunningTime="2025-12-01 02:58:42.169287829 +0000 UTC m=+151.680542201" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.196581 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7wlgm"] Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.215616 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95ce7326_f487_4c1e_80e9_cc39e4af2708.slice/crio-f49d74a33ec082bd31aa537fdf5dd639ef90ed3a91d64d5f92096fbe26a96f08.scope\": RecentStats: unable to find data in memory cache]" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.236738 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.278636 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.285001 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.78497633 +0000 UTC m=+152.296230702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.389814 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.390395 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.890384092 +0000 UTC m=+152.401638464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.429412 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" event={"ID":"61db3a36-06c6-43f4-b78c-90dbb61eb095","Type":"ContainerStarted","Data":"97f0d410f7daf5a8caca7f0bfa74d50125e5df6200b93b13300823fe35288f6b"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.429984 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" event={"ID":"b9f53556-00ef-4ba6-a73b-880533578d2e","Type":"ContainerStarted","Data":"738efb2fd35d7ac8d947a522186ab84e8ced7a9dc4eaf2f417797c7ec24ad1f8"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.430538 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" event={"ID":"79af8363-2911-45e0-9b07-3421b2626de0","Type":"ContainerStarted","Data":"b6ccc0aaad30fc4b75dbac5d89acf50b5b87c82266adf2f8ac0a56406ec63300"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.431125 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" event={"ID":"9cc63ffe-2f3f-4805-b13b-8da24a393826","Type":"ContainerStarted","Data":"ea79d32569297b2a9ba8c6562441da9a3617d8becd903437417664581c3df15d"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.431968 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-62v7v" event={"ID":"5ece886a-bdc2-4c08-b6a8-4fd522409dee","Type":"ContainerStarted","Data":"5c5ca2717a357b32c990c95298677b2abc1554be6d4412f57fb5172e95751c56"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.433762 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" event={"ID":"2c6e35c8-2541-40a3-8d9e-de756d5b821a","Type":"ContainerStarted","Data":"6c13c0228e01b1b26cdb363d40fdba3817eb118c5e6374c45cbac7878fd1e8e2"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.447812 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.460261 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wffb2" event={"ID":"76e86e67-dde6-4b7c-883b-ce22eb444299","Type":"ContainerStarted","Data":"459bfd0e2fdca8ac4a93ce85e300195cc18fef4252e0f7433ee4947f14e1db3d"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.464747 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-62v7v" podStartSLOduration=132.464729571 podStartE2EDuration="2m12.464729571s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.464603618 +0000 UTC m=+151.975857990" watchObservedRunningTime="2025-12-01 02:58:42.464729571 +0000 UTC m=+151.975983943" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.465664 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" 
event={"ID":"4630aaa6-b6fb-4636-b2b6-4ceb52375b04","Type":"ContainerStarted","Data":"52be50bbd69abe271a67bda0c472fc2cdeb9ce0a28f6c3c36186f7a07b51a346"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.467822 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-9dvqb" event={"ID":"7764dd3c-3f1c-45a3-b480-8e80002d4f6d","Type":"ContainerStarted","Data":"bce3bf6ec0b0117b613461fa4394abae2baa446a9d77c866f44ea60df893262e"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.473665 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" event={"ID":"6a21c7c7-1f54-4cfe-af87-11e397fead60","Type":"ContainerStarted","Data":"428242c2109c125cfabe52375a0f1e40f525e1b7a684e4deb3f2f8c0abc88a2d"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.475359 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" event={"ID":"316090ee-bdeb-4d02-aee1-6734a421c126","Type":"ContainerStarted","Data":"a626fb11c1ec3179ac1b8be0a77679eb7af2fbd0a606b1a1b88eb5373acb49ee"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.476146 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" event={"ID":"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb","Type":"ContainerStarted","Data":"5e93e2d506a560c864abe64eff32647f0317950649852dd13225482c6db331e8"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.493549 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.493926 
4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:42.993907239 +0000 UTC m=+152.505161611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.495726 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.516202 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fj5rb" podStartSLOduration=132.516186088 podStartE2EDuration="2m12.516186088s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.515365389 +0000 UTC m=+152.026619761" watchObservedRunningTime="2025-12-01 02:58:42.516186088 +0000 UTC m=+152.027440460" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.518457 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.518485 4880 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.526225 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" event={"ID":"a333682a-02d1-4a1e-900e-7a78e7b67317","Type":"ContainerStarted","Data":"c55b5d8f3337c705c283e171ab01efc80f94172ff2541f7bd0c3ae3f5d45acba"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.589254 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-9dvqb" podStartSLOduration=6.589235857 podStartE2EDuration="6.589235857s" podCreationTimestamp="2025-12-01 02:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.559626508 +0000 UTC m=+152.070880880" watchObservedRunningTime="2025-12-01 02:58:42.589235857 +0000 UTC m=+152.100490229" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.590390 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j5tcx"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.591970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" event={"ID":"bdeead84-dfe5-482c-af20-6bad5984c7bf","Type":"ContainerStarted","Data":"4516f1c73a503afb13b350e6b4bb1d40c68dbf6dd08db8baffdc2d305ae4760c"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.596598 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.597387 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.097371806 +0000 UTC m=+152.608626168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.600030 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" event={"ID":"ea1b23cc-880d-4f32-a077-daac8279716a","Type":"ContainerStarted","Data":"fe06400cf1d702e9ec2e17d36ef2608baea3215c45fe62f9a37ba4de934367b7"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.610698 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" event={"ID":"b3164eb7-c85e-4eaa-8318-b887832da2e5","Type":"ContainerStarted","Data":"961742992ab1dcb193806d9d3275aaec6aa4c3643ba9f12e34a181d00719bc15"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.635028 4880 generic.go:334] "Generic (PLEG): container finished" podID="d48b50d9-30fa-455f-b3b3-5c781089871f" containerID="86bde32ff316111141345463f9367f6da100fa083248e4a5642b42bddcc25e19" 
exitCode=0 Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.635115 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" event={"ID":"d48b50d9-30fa-455f-b3b3-5c781089871f","Type":"ContainerDied","Data":"86bde32ff316111141345463f9367f6da100fa083248e4a5642b42bddcc25e19"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.635142 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" event={"ID":"d48b50d9-30fa-455f-b3b3-5c781089871f","Type":"ContainerStarted","Data":"dd56050cfb7d9f6762d5cc55f014296554e8dcaa105e9922198b3f4fe8746cdc"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.635844 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.650355 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" event={"ID":"8a71431a-c15f-457e-9058-577e362c8f8a","Type":"ContainerStarted","Data":"3c1a14b8937ea1acaa76b9a5c5472564e983e6ef8f7c3bb0cdd33aab29d55f42"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.650388 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" event={"ID":"8a71431a-c15f-457e-9058-577e362c8f8a","Type":"ContainerStarted","Data":"ce7eaad6dbb2970e067ceac6f55b0bad19c0f39ba06a9b75dd73b8dfe71acfa2"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.650939 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.652966 4880 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v8mf4 container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.653008 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" podUID="8a71431a-c15f-457e-9058-577e362c8f8a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.653046 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" event={"ID":"d7cca742-7184-431c-b480-273f5fbe6dba","Type":"ContainerStarted","Data":"f9ad87f9937b2367ca69eec419bc8e05a4268387ad401734b94dbe2d00545c9d"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.665308 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" event={"ID":"59b10596-74c6-4b9f-aaa9-69d60015a048","Type":"ContainerStarted","Data":"e00217f1a88f6a305fe6796f570e8a1b54af4fa80003902abe2d01fd98faca0d"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.689222 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" event={"ID":"939a734d-ecb7-43f1-a7be-e05668e0cc32","Type":"ContainerStarted","Data":"5bb732dc47b17d6d8b4641e98fe18bf21ee36fdd71b94282d8b73294456f2390"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.697137 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" 
event={"ID":"d4766bca-6e7e-4ce9-acdb-ca266883540c","Type":"ContainerStarted","Data":"e715b0e1510b57e1e2037c504ddbd62a8190055aadb025d0132b2df6f3d939a4"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.697613 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.698532 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.198512458 +0000 UTC m=+152.709766830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.701120 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qcvrn" event={"ID":"747403d3-576b-4621-8cb3-b9122348ec98","Type":"ContainerStarted","Data":"51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.703611 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vmhg8" podStartSLOduration=132.703594486 
podStartE2EDuration="2m12.703594486s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.689341675 +0000 UTC m=+152.200596047" watchObservedRunningTime="2025-12-01 02:58:42.703594486 +0000 UTC m=+152.214848858" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.706211 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lf8k7"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.725214 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" event={"ID":"8bea9ed3-3bfc-4f97-bf60-d544e791e5f5","Type":"ContainerStarted","Data":"679e0daee2ec7b6738e088a03d62fbec3922b977d18c8549f8bce83c51e7b718"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.731759 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" event={"ID":"f36bde77-88b0-46fb-b33d-85c7c430ab11","Type":"ContainerStarted","Data":"82168d82327dbcb8d9d17d40796163da7ae099e2664a67e7d3e4a085f54262c0"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.732929 4880 generic.go:334] "Generic (PLEG): container finished" podID="95ce7326-f487-4c1e-80e9-cc39e4af2708" containerID="f49d74a33ec082bd31aa537fdf5dd639ef90ed3a91d64d5f92096fbe26a96f08" exitCode=0 Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.732970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" event={"ID":"95ce7326-f487-4c1e-80e9-cc39e4af2708","Type":"ContainerDied","Data":"f49d74a33ec082bd31aa537fdf5dd639ef90ed3a91d64d5f92096fbe26a96f08"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.736931 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" 
event={"ID":"6f0c30e2-825b-40cc-8c89-f454853ded08","Type":"ContainerStarted","Data":"4f3ff941f3df84a5fd50b068618177cb66e7cffe084560dfcb0eefbe7ad99282"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.738162 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" event={"ID":"5ab44990-cfce-4e00-8566-b8902400d263","Type":"ContainerStarted","Data":"fe6c3f15137694a178da5bd872ac95f22b177ee482c30d9535b7486ca384a7b4"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.738193 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" event={"ID":"5ab44990-cfce-4e00-8566-b8902400d263","Type":"ContainerStarted","Data":"ad4fe8224e8341c30d50636ebef7973cd5fc22388dd37184a7bb49e1ef8d39b7"} Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.739415 4880 patch_prober.go:28] interesting pod/downloads-7954f5f757-qb4wv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.739443 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qb4wv" podUID="865e44df-b483-40e5-9a4f-d78fce50d532" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.802934 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.804827 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.304815281 +0000 UTC m=+152.816069653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.887212 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" podStartSLOduration=133.887195567 podStartE2EDuration="2m13.887195567s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.780231099 +0000 UTC m=+152.291485481" watchObservedRunningTime="2025-12-01 02:58:42.887195567 +0000 UTC m=+152.398449939" Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.889299 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tt6fz"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.905322 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xj7f7"] Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.906767 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:42 crc kubenswrapper[4880]: I1201 02:58:42.907466 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb"] Dec 01 02:58:42 crc kubenswrapper[4880]: E1201 02:58:42.907534 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.407517959 +0000 UTC m=+152.918772321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.024501 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.024822 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:43.524808467 +0000 UTC m=+153.036062839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.101818 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" podStartSLOduration=133.101802608 podStartE2EDuration="2m13.101802608s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:42.912194098 +0000 UTC m=+152.423448480" watchObservedRunningTime="2025-12-01 02:58:43.101802608 +0000 UTC m=+152.613056980" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.126163 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.126762 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.626745818 +0000 UTC m=+153.138000190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.186665 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bbwxx" podStartSLOduration=134.186645081 podStartE2EDuration="2m14.186645081s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:43.103341814 +0000 UTC m=+152.614596186" watchObservedRunningTime="2025-12-01 02:58:43.186645081 +0000 UTC m=+152.697899453" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.228552 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.228857 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.728844633 +0000 UTC m=+153.240099005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.251791 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qcvrn" podStartSLOduration=134.251776876 podStartE2EDuration="2m14.251776876s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:43.187814498 +0000 UTC m=+152.699068870" watchObservedRunningTime="2025-12-01 02:58:43.251776876 +0000 UTC m=+152.763031238" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.329243 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.331526 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.8314982 +0000 UTC m=+153.342752572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.333430 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.333806 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.833793144 +0000 UTC m=+153.345047516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.434432 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.434672 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:43.934654009 +0000 UTC m=+153.445908391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.497952 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:43 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:43 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:43 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.497994 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.538652 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.539210 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:44.039199051 +0000 UTC m=+153.550453423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.641179 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.641536 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.14151189 +0000 UTC m=+153.652766262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.743400 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.743888 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.2438627 +0000 UTC m=+153.755117062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.750238 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" event={"ID":"6a21c7c7-1f54-4cfe-af87-11e397fead60","Type":"ContainerStarted","Data":"d8ac3f5d20115e1bdf8ad023118df00a4099f71767a28ed40242a7691c130f43"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.751112 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.751905 4880 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cqg4c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.751931 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" podUID="6a21c7c7-1f54-4cfe-af87-11e397fead60" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.763192 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7wlgm" 
event={"ID":"49e299ff-c65c-4b5c-b429-8b816d30bddb","Type":"ContainerStarted","Data":"a525853a81044425e50300ed7783141ee319890d151249a851f83e96f57bca8f"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.763229 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7wlgm" event={"ID":"49e299ff-c65c-4b5c-b429-8b816d30bddb","Type":"ContainerStarted","Data":"2c28d8987c908b4e391df84cf97de5562bdad88e2781ce11d02cafc4a10893d9"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.768291 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" event={"ID":"61db3a36-06c6-43f4-b78c-90dbb61eb095","Type":"ContainerStarted","Data":"370d2289dd147cfef5f707ae551cbd67f8d493ecb5837876222adfe2b4473c79"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.776864 4880 generic.go:334] "Generic (PLEG): container finished" podID="79af8363-2911-45e0-9b07-3421b2626de0" containerID="69b0df93d73ea6cce55140d0277ff8bc962902bacfe3172b89199f34ee69ce0d" exitCode=0 Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.777489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" event={"ID":"79af8363-2911-45e0-9b07-3421b2626de0","Type":"ContainerDied","Data":"69b0df93d73ea6cce55140d0277ff8bc962902bacfe3172b89199f34ee69ce0d"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.782577 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" event={"ID":"2c6e35c8-2541-40a3-8d9e-de756d5b821a","Type":"ContainerStarted","Data":"23591cb2f905184481491dd495370b40888851ffb9215b1e5f0a5b8a0b182819"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.782750 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" podStartSLOduration=133.782733254 
podStartE2EDuration="2m13.782733254s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:43.78171034 +0000 UTC m=+153.292964712" watchObservedRunningTime="2025-12-01 02:58:43.782733254 +0000 UTC m=+153.293987626" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.798709 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" event={"ID":"9cc63ffe-2f3f-4805-b13b-8da24a393826","Type":"ContainerStarted","Data":"eeda7bac87a830a6d51edb2f8c33439745292064ba416fb3655825f26df2232b"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.801232 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" event={"ID":"316090ee-bdeb-4d02-aee1-6734a421c126","Type":"ContainerStarted","Data":"6c6228227e49fd7f746a9b99c05df568f19bf75b33e46904a28f7dd9650c1340"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.803131 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" event={"ID":"86a373c4-7ca5-4e3e-91a7-4dae1241a7fb","Type":"ContainerStarted","Data":"2a657f8637ae0bc644beefdecf6a2af7ac0d7169275b1ff4597e23257ce13a2b"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.803468 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.809561 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" event={"ID":"6f0c30e2-825b-40cc-8c89-f454853ded08","Type":"ContainerStarted","Data":"161e4bc2f7f80128cfa6811a729fdad099b1b8b11c8f56470095f271b0d2fe14"} Dec 01 
02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.811585 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7wlgm" podStartSLOduration=7.811576455 podStartE2EDuration="7.811576455s" podCreationTimestamp="2025-12-01 02:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:43.809576138 +0000 UTC m=+153.320830510" watchObservedRunningTime="2025-12-01 02:58:43.811576455 +0000 UTC m=+153.322830827" Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.816002 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" event={"ID":"fd0d64d0-7952-425c-95d5-5180ed5f588c","Type":"ContainerStarted","Data":"86725aed7757c303017d2cfcadbe4f852979dd894d67125c6980ebff820dc36f"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.824224 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xj7f7" event={"ID":"36e3f59c-69a4-423e-8820-58e3309d5aa9","Type":"ContainerStarted","Data":"21ad90c2101aba934c3466514befea6769e9bee478c084b2ca5a858dca480ee3"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.844263 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.844336 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:44.344321016 +0000 UTC m=+153.855575388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.846226 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.848244 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.348234507 +0000 UTC m=+153.859488879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.894227 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" event={"ID":"68495471-e39a-458d-ae4e-0021a7644254","Type":"ContainerStarted","Data":"e34f45469eda16606d83ed31014f5e6464d6731355bf46bcf29490d360352e34"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.897860 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" event={"ID":"59b10596-74c6-4b9f-aaa9-69d60015a048","Type":"ContainerStarted","Data":"30f5dbf02f11b6ca42773624482e296c8a16132f60421b4cc443de70fa270bbb"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.910917 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" event={"ID":"57f8716f-6b5d-4f01-9341-674dba56876a","Type":"ContainerStarted","Data":"35a41d317f866d069e9a83f198ef09f7b6eb8029ce9ba4b6b41d2ac2b1c47ebe"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.959657 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:43 crc kubenswrapper[4880]: E1201 02:58:43.960027 4880 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.460012527 +0000 UTC m=+153.971266899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.960452 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" event={"ID":"d9fd9260-cfde-4ec1-8b3c-c757712369d6","Type":"ContainerStarted","Data":"62a8f60651ca71973eeff6f36ae9ecc111576c69c2ea917bbce367abfeab2bd9"} Dec 01 02:58:43 crc kubenswrapper[4880]: I1201 02:58:43.960587 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d5h9k" podStartSLOduration=133.96057767 podStartE2EDuration="2m13.96057767s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:43.957172771 +0000 UTC m=+153.468427133" watchObservedRunningTime="2025-12-01 02:58:43.96057767 +0000 UTC m=+153.471832042" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.002230 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" 
event={"ID":"f36bde77-88b0-46fb-b33d-85c7c430ab11","Type":"ContainerStarted","Data":"5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.003294 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.014421 4880 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vgdhn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.014477 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" podUID="f36bde77-88b0-46fb-b33d-85c7c430ab11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.038742 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-khjz7" podStartSLOduration=134.038722308 podStartE2EDuration="2m14.038722308s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.035215216 +0000 UTC m=+153.546469598" watchObservedRunningTime="2025-12-01 02:58:44.038722308 +0000 UTC m=+153.549976680" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.039371 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" podStartSLOduration=134.039364273 podStartE2EDuration="2m14.039364273s" 
podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.003169231 +0000 UTC m=+153.514423603" watchObservedRunningTime="2025-12-01 02:58:44.039364273 +0000 UTC m=+153.550618645" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.046145 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" event={"ID":"939a734d-ecb7-43f1-a7be-e05668e0cc32","Type":"ContainerStarted","Data":"aff8ad5e85f8623db71ca3112e6b102b950e1b69d216cfa34dfa882a47482209"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.066325 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.068510 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.56849446 +0000 UTC m=+154.079748832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.088031 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" podStartSLOduration=134.088015364 podStartE2EDuration="2m14.088015364s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.084773629 +0000 UTC m=+153.596028001" watchObservedRunningTime="2025-12-01 02:58:44.088015364 +0000 UTC m=+153.599269736" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.095864 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" event={"ID":"b9f53556-00ef-4ba6-a73b-880533578d2e","Type":"ContainerStarted","Data":"c4309ac61520f4489709926bb117b900e6734f5feaf617f1d767aeec3c6e888a"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.118933 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gns66" podStartSLOduration=134.118919753 podStartE2EDuration="2m14.118919753s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.11836335 +0000 UTC m=+153.629617722" watchObservedRunningTime="2025-12-01 02:58:44.118919753 +0000 UTC 
m=+153.630174125" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.129520 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" event={"ID":"d4766bca-6e7e-4ce9-acdb-ca266883540c","Type":"ContainerStarted","Data":"29200e0408084fb257043530af5d5ec307f7fb7a7e18dd4db4181d9a93d29fb7"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.161944 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8xnpm" podStartSLOduration=134.161922673 podStartE2EDuration="2m14.161922673s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.151623494 +0000 UTC m=+153.662877866" watchObservedRunningTime="2025-12-01 02:58:44.161922673 +0000 UTC m=+153.673177045" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.168757 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.169060 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.669046369 +0000 UTC m=+154.180300741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.179052 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" event={"ID":"12c4c6bc-892f-4695-ac59-4a930d0a8925","Type":"ContainerStarted","Data":"e22859ca93ad14ee8c4450b2d4c5a4a71ac98c3f988ca929946468073841bcc5"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.196569 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" event={"ID":"b3164eb7-c85e-4eaa-8318-b887832da2e5","Type":"ContainerStarted","Data":"752d89c310390c451dd8f136e227e17ccda09af33223b2bb18950c68fc3e46ca"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.197398 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.204616 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" event={"ID":"8550af07-b9df-4fcf-bdd9-7c282f1f4e88","Type":"ContainerStarted","Data":"c58450812202775e03fc41bd74c6246b0d4051e83a8651d18f39f120ccda175f"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.204647 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" 
event={"ID":"8550af07-b9df-4fcf-bdd9-7c282f1f4e88","Type":"ContainerStarted","Data":"fd12ce67c211cfc397d1fb2883ec88a60a471c26afc214d84817e107804f15c7"} Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.218824 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" podStartSLOduration=134.218810186 podStartE2EDuration="2m14.218810186s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.216892932 +0000 UTC m=+153.728147304" watchObservedRunningTime="2025-12-01 02:58:44.218810186 +0000 UTC m=+153.730064558" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.279771 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.289453 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.789440399 +0000 UTC m=+154.300694771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.396179 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.396830 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:44.896813956 +0000 UTC m=+154.408068328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.504454 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.504779 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.004768917 +0000 UTC m=+154.516023279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.507042 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:44 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:44 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:44 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.507093 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.605219 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.605394 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:45.105366367 +0000 UTC m=+154.616620739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.605471 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.605747 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.105740055 +0000 UTC m=+154.616994427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.706110 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.706423 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.206396546 +0000 UTC m=+154.717650908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.706637 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.706995 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.20697764 +0000 UTC m=+154.718232012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.824190 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v8mf4" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.824646 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.825137 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.325119058 +0000 UTC m=+154.836373430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.866817 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.881897 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sdrzn" podStartSLOduration=135.881883998 podStartE2EDuration="2m15.881883998s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:44.248432205 +0000 UTC m=+153.759686587" watchObservedRunningTime="2025-12-01 02:58:44.881883998 +0000 UTC m=+154.393138360" Dec 01 02:58:44 crc kubenswrapper[4880]: I1201 02:58:44.928679 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:44 crc kubenswrapper[4880]: E1201 02:58:44.929005 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:45.428993464 +0000 UTC m=+154.940247836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.034086 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.034368 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.534354204 +0000 UTC m=+155.045608576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.135983 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.136294 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.636283715 +0000 UTC m=+155.147538087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.230032 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" event={"ID":"d9fd9260-cfde-4ec1-8b3c-c757712369d6","Type":"ContainerStarted","Data":"986f68b703de6eaef39b42051948c64a8455f000e22ac1e864d6516dd6aa5f1e"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.236444 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.236748 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.736733791 +0000 UTC m=+155.247988163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.237785 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" event={"ID":"79af8363-2911-45e0-9b07-3421b2626de0","Type":"ContainerStarted","Data":"c4eb370681b18607aec57236ffcd9164e115bf012b7444b8b8d5fd6f6f38b6ea"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.246501 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" event={"ID":"95ce7326-f487-4c1e-80e9-cc39e4af2708","Type":"ContainerStarted","Data":"caa84ad0c0d00bcb284ad28907bd94b43d28e89d6045f73888a0325f9b7adc9e"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.255355 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" event={"ID":"b9f53556-00ef-4ba6-a73b-880533578d2e","Type":"ContainerStarted","Data":"0dba3b5d62cdff017c8c0bb446b35e37b759b82f5784ddd0c50d2ffad82a366d"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.266414 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" event={"ID":"fd0d64d0-7952-425c-95d5-5180ed5f588c","Type":"ContainerStarted","Data":"2ff6338ce89514589b92b64ac3a7b3c79448a8091f3063824b2badd303036142"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.267231 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:45 crc 
kubenswrapper[4880]: I1201 02:58:45.268616 4880 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lf8k7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.268676 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.269853 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" event={"ID":"61db3a36-06c6-43f4-b78c-90dbb61eb095","Type":"ContainerStarted","Data":"5742481481fa5dd5350bf71ae24c6cb2bd23dc7b8efca48cef6abaecc6d0fe2d"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.275114 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" event={"ID":"68495471-e39a-458d-ae4e-0021a7644254","Type":"ContainerStarted","Data":"eb0a1b1877ffa1ed96c7c2846aea3b7b09b211e14071a4c828bddd817f6cb855"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.276831 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" podStartSLOduration=136.276820283 podStartE2EDuration="2m16.276820283s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.259681565 +0000 UTC m=+154.770935937" watchObservedRunningTime="2025-12-01 02:58:45.276820283 +0000 UTC 
m=+154.788074645" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.278584 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v8q2n" podStartSLOduration=135.278577934 podStartE2EDuration="2m15.278577934s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.276208569 +0000 UTC m=+154.787462941" watchObservedRunningTime="2025-12-01 02:58:45.278577934 +0000 UTC m=+154.789832306" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.298334 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" event={"ID":"2c6e35c8-2541-40a3-8d9e-de756d5b821a","Type":"ContainerStarted","Data":"f6d3ab4688d425911c57a999d71aaa77e478d3f89bc1eab0575893d87c993b5e"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.304545 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" podStartSLOduration=135.304531468 podStartE2EDuration="2m15.304531468s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.304087458 +0000 UTC m=+154.815341840" watchObservedRunningTime="2025-12-01 02:58:45.304531468 +0000 UTC m=+154.815785840" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.324989 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" event={"ID":"8550af07-b9df-4fcf-bdd9-7c282f1f4e88","Type":"ContainerStarted","Data":"f9112a78ec7ca7c3b6d788f1786eb2ada3bc158b1cb9b3e7e0dd8ce1b3ef01ff"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.337385 4880 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" podStartSLOduration=135.337368752 podStartE2EDuration="2m15.337368752s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.336678685 +0000 UTC m=+154.847933057" watchObservedRunningTime="2025-12-01 02:58:45.337368752 +0000 UTC m=+154.848623124" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.337543 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.339509 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.839498511 +0000 UTC m=+155.350752883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.339669 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" event={"ID":"57f8716f-6b5d-4f01-9341-674dba56876a","Type":"ContainerStarted","Data":"d1140f1a90cf9c084f1f2d44efda7e92d2c623f1d90ddfa21672314fa1e7d1e1"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.340075 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.341612 4880 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m6p9n container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.341647 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" podUID="57f8716f-6b5d-4f01-9341-674dba56876a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.350973 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" 
event={"ID":"bdeead84-dfe5-482c-af20-6bad5984c7bf","Type":"ContainerStarted","Data":"4cafb0e33915c0cdf75c3f8ef194ece34c1081c1babde78e60930ccc16c7b82c"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.361400 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" event={"ID":"d7cca742-7184-431c-b480-273f5fbe6dba","Type":"ContainerStarted","Data":"6c5b97fc0e51629b807acf28b673e126d76aee6398038cb9e92ddbf54812a2f1"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.378645 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x44c7" podStartSLOduration=136.378626881 podStartE2EDuration="2m16.378626881s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.374190868 +0000 UTC m=+154.885445240" watchObservedRunningTime="2025-12-01 02:58:45.378626881 +0000 UTC m=+154.889881253" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.384897 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" event={"ID":"6f0c30e2-825b-40cc-8c89-f454853ded08","Type":"ContainerStarted","Data":"4c89d0eb79a9a392f21d72ac1e1d950c3e81f85301a95b0df4e2035da982eec6"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.398913 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" event={"ID":"5ab44990-cfce-4e00-8566-b8902400d263","Type":"ContainerStarted","Data":"ad5fd284eb0ea8c782c1271cfc3420adffa64468a1ac7cc5f5a571826e1e38b1"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.403494 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-q4j7p" podStartSLOduration=135.403480739 podStartE2EDuration="2m15.403480739s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.403340136 +0000 UTC m=+154.914594508" watchObservedRunningTime="2025-12-01 02:58:45.403480739 +0000 UTC m=+154.914735111" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.406520 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xj7f7" event={"ID":"36e3f59c-69a4-423e-8820-58e3309d5aa9","Type":"ContainerStarted","Data":"43eb41959a5efda9a684f82c8acd374cad7d7597dd18c49ffb66eb3013714946"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.407113 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.419042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tt6fz" event={"ID":"12c4c6bc-892f-4695-ac59-4a930d0a8925","Type":"ContainerStarted","Data":"9773cd314ccf193db55a13dc3918bc2ebec48a4f67d1e9fc9cdd849b8d49880c"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.433553 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gmlbx" podStartSLOduration=135.433539898 podStartE2EDuration="2m15.433539898s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.432248188 +0000 UTC m=+154.943502550" watchObservedRunningTime="2025-12-01 02:58:45.433539898 +0000 UTC m=+154.944794270" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.441308 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.442414 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:45.942400614 +0000 UTC m=+155.453654986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.461340 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" event={"ID":"ea1b23cc-880d-4f32-a077-daac8279716a","Type":"ContainerStarted","Data":"36733424a9760e6e53657a8c4b75f7fef349043a0aa27c1604ae1fb0e8a70539"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.496016 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ctn57" podStartSLOduration=135.495999651 podStartE2EDuration="2m15.495999651s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.468167834 +0000 UTC m=+154.979422206" 
watchObservedRunningTime="2025-12-01 02:58:45.495999651 +0000 UTC m=+155.007254023" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.499249 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" event={"ID":"59b10596-74c6-4b9f-aaa9-69d60015a048","Type":"ContainerStarted","Data":"5d60598c06870d8a254e6542f5c1d953d78b7f8724454a8347b1012bc748bd19"} Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.512274 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:45 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:45 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:45 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.512324 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.531222 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cqg4c" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.532893 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" podStartSLOduration=135.532881799 podStartE2EDuration="2m15.532881799s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.497949406 +0000 UTC m=+155.009203778" 
watchObservedRunningTime="2025-12-01 02:58:45.532881799 +0000 UTC m=+155.044136161" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.534506 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xx59g" podStartSLOduration=135.534497806 podStartE2EDuration="2m15.534497806s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.531887706 +0000 UTC m=+155.043142088" watchObservedRunningTime="2025-12-01 02:58:45.534497806 +0000 UTC m=+155.045752178" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.543663 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.576471 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.076456402 +0000 UTC m=+155.587710774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.595476 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n6tkj" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.612110 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twglb" podStartSLOduration=135.612088371 podStartE2EDuration="2m15.612088371s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.595637458 +0000 UTC m=+155.106891830" watchObservedRunningTime="2025-12-01 02:58:45.612088371 +0000 UTC m=+155.123342743" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.646511 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.646909 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:46.14689088 +0000 UTC m=+155.658145252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.677054 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-htspn" podStartSLOduration=135.677037572 podStartE2EDuration="2m15.677037572s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.654291283 +0000 UTC m=+155.165545655" watchObservedRunningTime="2025-12-01 02:58:45.677037572 +0000 UTC m=+155.188291944" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.731422 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hqpsf" podStartSLOduration=135.731405786 podStartE2EDuration="2m15.731405786s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.70665523 +0000 UTC m=+155.217909602" watchObservedRunningTime="2025-12-01 02:58:45.731405786 +0000 UTC m=+155.242660158" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.747841 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.748171 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.248159916 +0000 UTC m=+155.759414288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.828517 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-d8pwf" podStartSLOduration=135.828497464 podStartE2EDuration="2m15.828497464s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.765736995 +0000 UTC m=+155.276991367" watchObservedRunningTime="2025-12-01 02:58:45.828497464 +0000 UTC m=+155.339751836" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.829304 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xj7f7" podStartSLOduration=9.829300603 podStartE2EDuration="9.829300603s" podCreationTimestamp="2025-12-01 02:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:45.828689289 +0000 UTC m=+155.339943671" watchObservedRunningTime="2025-12-01 02:58:45.829300603 +0000 UTC m=+155.340554975" Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.850007 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.850277 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.35026163 +0000 UTC m=+155.861516002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:45 crc kubenswrapper[4880]: I1201 02:58:45.951299 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:45 crc kubenswrapper[4880]: E1201 02:58:45.951605 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.451594467 +0000 UTC m=+155.962848829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.051915 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.052292 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.552265119 +0000 UTC m=+156.063519491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.052335 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.052795 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.5527791 +0000 UTC m=+156.064033472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.153649 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.153839 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.65381439 +0000 UTC m=+156.165068762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.153924 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.154319 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.654311322 +0000 UTC m=+156.165565694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.191054 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65qhf"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.191910 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.197402 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.251606 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65qhf"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.255387 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.255542 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.755520876 +0000 UTC m=+156.266775248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.255615 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-catalog-content\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.255762 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.255808 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s46r\" (UniqueName: \"kubernetes.io/projected/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-kube-api-access-4s46r\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.255838 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-utilities\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.256154 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.75614776 +0000 UTC m=+156.267402132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.356428 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.356995 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-catalog-content\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.357080 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s46r\" 
(UniqueName: \"kubernetes.io/projected/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-kube-api-access-4s46r\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.357107 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-utilities\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.357481 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-utilities\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.357557 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.857535469 +0000 UTC m=+156.368789841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.357782 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-catalog-content\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.364081 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.376312 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8wd4"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.377292 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.379577 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.403939 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s46r\" (UniqueName: \"kubernetes.io/projected/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-kube-api-access-4s46r\") pod \"certified-operators-65qhf\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") " pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.410992 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8wd4"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.459444 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-utilities\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.459496 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-catalog-content\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.459535 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.459556 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbjwn\" (UniqueName: \"kubernetes.io/projected/d076e21e-9946-4ee4-9953-7c0a3830c0fc-kube-api-access-nbjwn\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.459810 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:46.959799587 +0000 UTC m=+156.471053959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.501951 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:46 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:46 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:46 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.502278 4880 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.505125 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.507410 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xj7f7" event={"ID":"36e3f59c-69a4-423e-8820-58e3309d5aa9","Type":"ContainerStarted","Data":"d64f54f8314eaf16536214d6659455cfd4cdd0afb3041bd6719f3e1afcbc44b4"} Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.509838 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" event={"ID":"79af8363-2911-45e0-9b07-3421b2626de0","Type":"ContainerStarted","Data":"f6947820f7de504cc7537be3d11c47d7f1bbcb05370dcc29f8c1ce007263db34"} Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.512947 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" event={"ID":"68495471-e39a-458d-ae4e-0021a7644254","Type":"ContainerStarted","Data":"f57af6eaaaf836adff123973a6f62827b4735eb5320ca0367c4bb3420373d87e"} Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.513101 4880 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lf8k7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.513131 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.553120 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" podStartSLOduration=136.553102147 podStartE2EDuration="2m16.553102147s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:46.551565851 +0000 UTC m=+156.062820223" watchObservedRunningTime="2025-12-01 02:58:46.553102147 +0000 UTC m=+156.064356519" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.562418 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.562628 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbjwn\" (UniqueName: \"kubernetes.io/projected/d076e21e-9946-4ee4-9953-7c0a3830c0fc-kube-api-access-nbjwn\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.562977 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-utilities\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.582709 4880 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m6p9n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.593430 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-utilities\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.596326 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-catalog-content\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.597101 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.09708137 +0000 UTC m=+156.608335742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.599258 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-catalog-content\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.626383 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dv65n"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.635121 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.640852 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbjwn\" (UniqueName: \"kubernetes.io/projected/d076e21e-9946-4ee4-9953-7c0a3830c0fc-kube-api-access-nbjwn\") pod \"community-operators-j8wd4\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") " pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.644517 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dv65n"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.715542 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-catalog-content\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.715608 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.715679 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzb5\" (UniqueName: \"kubernetes.io/projected/058f5e4b-8c67-4ac4-ba76-857541f70949-kube-api-access-snzb5\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 
02:58:46.715696 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-utilities\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.716156 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.718242 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.218225137 +0000 UTC m=+156.729479509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.806180 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4cxsv"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.807408 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.818389 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.818545 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.31853041 +0000 UTC m=+156.829784782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.818693 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.818749 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snzb5\" (UniqueName: 
\"kubernetes.io/projected/058f5e4b-8c67-4ac4-ba76-857541f70949-kube-api-access-snzb5\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.818777 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-utilities\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.818830 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-catalog-content\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.819208 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-catalog-content\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.819301 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-utilities\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.819602 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.319587735 +0000 UTC m=+156.830842107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.837339 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4cxsv"] Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.850253 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snzb5\" (UniqueName: \"kubernetes.io/projected/058f5e4b-8c67-4ac4-ba76-857541f70949-kube-api-access-snzb5\") pod \"certified-operators-dv65n\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.936582 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.937072 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6x27\" (UniqueName: \"kubernetes.io/projected/164c787a-f422-44ea-9cac-99166ce43f0b-kube-api-access-n6x27\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 
01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.937113 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-catalog-content\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:46 crc kubenswrapper[4880]: I1201 02:58:46.937150 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-utilities\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:46 crc kubenswrapper[4880]: E1201 02:58:46.937303 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.437285672 +0000 UTC m=+156.948540044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.001928 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.048487 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6x27\" (UniqueName: \"kubernetes.io/projected/164c787a-f422-44ea-9cac-99166ce43f0b-kube-api-access-n6x27\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.048518 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-catalog-content\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.048553 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-utilities\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.048574 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.048857 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:47.548846557 +0000 UTC m=+157.060100929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.049751 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-utilities\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.049783 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-catalog-content\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.081448 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6x27\" (UniqueName: \"kubernetes.io/projected/164c787a-f422-44ea-9cac-99166ce43f0b-kube-api-access-n6x27\") pod \"community-operators-4cxsv\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.135203 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.154799 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.155167 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.655152059 +0000 UTC m=+157.166406421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.255862 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.256235 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.756220559 +0000 UTC m=+157.267474931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.327031 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65qhf"] Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.349279 4880 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.356839 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.357285 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:47.857266409 +0000 UTC m=+157.368520791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.369175 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.369220 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.458809 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.459406 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 02:58:47.959394835 +0000 UTC m=+157.470649207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.496858 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:47 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:47 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:47 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.496920 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.554108 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" event={"ID":"68495471-e39a-458d-ae4e-0021a7644254","Type":"ContainerStarted","Data":"893a6d1fb985992f17c29cd05e679d1f2d56d9d75ba406b901247e3cd30bdffa"} Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.561271 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qhf" 
event={"ID":"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4","Type":"ContainerStarted","Data":"ba3167329db5c782fc257a625a173ac766e81d0f9a3f93a821c5214c9a85a807"} Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.561749 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.562074 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:48.062058952 +0000 UTC m=+157.573313324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.565922 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8wd4"] Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.662682 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.664817 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:48.164804872 +0000 UTC m=+157.676059254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.764921 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.765389 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:48.265368851 +0000 UTC m=+157.776623223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.779664 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4cxsv"] Dec 01 02:58:47 crc kubenswrapper[4880]: W1201 02:58:47.795021 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164c787a_f422_44ea_9cac_99166ce43f0b.slice/crio-603e8ce4af9229d9afad199ea9a0bbb0468979d16f9338af539852a1b1a4f8cb WatchSource:0}: Error finding container 603e8ce4af9229d9afad199ea9a0bbb0468979d16f9338af539852a1b1a4f8cb: Status 404 returned error can't find the container with id 603e8ce4af9229d9afad199ea9a0bbb0468979d16f9338af539852a1b1a4f8cb Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.836905 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dv65n"] Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.866949 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.867254 4880 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 02:58:48.36724342 +0000 UTC m=+157.878497792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zld26" (UID: "74dde675-4516-4165-badb-d7233a017fe1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.967445 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:47 crc kubenswrapper[4880]: E1201 02:58:47.967784 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 02:58:48.467767978 +0000 UTC m=+157.979022340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 02:58:47 crc kubenswrapper[4880]: I1201 02:58:47.971300 4880 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-01T02:58:47.349301714Z","Handler":null,"Name":""} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.016021 4880 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.016064 4880 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.069313 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.071453 4880 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.071476 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.086923 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zld26\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.156316 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.171289 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.214328 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.366698 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rstnz"] Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.376754 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zld26"] Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.376897 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.377702 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rstnz"] Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.402360 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.475386 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-catalog-content\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.475543 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-utilities\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.475613 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfq7x\" (UniqueName: \"kubernetes.io/projected/07ddb3cc-a464-4645-8b0c-7475a9b75330-kube-api-access-cfq7x\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.493448 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 
01 02:58:48 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:48 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:48 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.493525 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.524635 4880 patch_prober.go:28] interesting pod/downloads-7954f5f757-qb4wv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.524669 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qb4wv" podUID="865e44df-b483-40e5-9a4f-d78fce50d532" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.524674 4880 patch_prober.go:28] interesting pod/downloads-7954f5f757-qb4wv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.524719 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qb4wv" podUID="865e44df-b483-40e5-9a4f-d78fce50d532" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.569623 4880 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" event={"ID":"68495471-e39a-458d-ae4e-0021a7644254","Type":"ContainerStarted","Data":"77e35a0a5b45ad61e53e43c09555a785d67913c34851921204a1553713bec4f6"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.572032 4880 generic.go:334] "Generic (PLEG): container finished" podID="164c787a-f422-44ea-9cac-99166ce43f0b" containerID="56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66" exitCode=0 Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.572115 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cxsv" event={"ID":"164c787a-f422-44ea-9cac-99166ce43f0b","Type":"ContainerDied","Data":"56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.572142 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cxsv" event={"ID":"164c787a-f422-44ea-9cac-99166ce43f0b","Type":"ContainerStarted","Data":"603e8ce4af9229d9afad199ea9a0bbb0468979d16f9338af539852a1b1a4f8cb"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.573222 4880 generic.go:334] "Generic (PLEG): container finished" podID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerID="beae3fd649ff9a666fa737987a228725fc7962b9ed86f7eb8f4286d12eadba08" exitCode=0 Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.573303 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8wd4" event={"ID":"d076e21e-9946-4ee4-9953-7c0a3830c0fc","Type":"ContainerDied","Data":"beae3fd649ff9a666fa737987a228725fc7962b9ed86f7eb8f4286d12eadba08"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.573328 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8wd4" 
event={"ID":"d076e21e-9946-4ee4-9953-7c0a3830c0fc","Type":"ContainerStarted","Data":"6de45d2ab0dadde6cddbec4f15bc2b87456246c2f5e5a4b610bd95e177ec711a"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.574371 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.577348 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-catalog-content\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.577375 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-utilities\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.577394 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfq7x\" (UniqueName: \"kubernetes.io/projected/07ddb3cc-a464-4645-8b0c-7475a9b75330-kube-api-access-cfq7x\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.577984 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-catalog-content\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.578209 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-utilities\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.584490 4880 generic.go:334] "Generic (PLEG): container finished" podID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerID="c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64" exitCode=0 Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.586295 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv65n" event={"ID":"058f5e4b-8c67-4ac4-ba76-857541f70949","Type":"ContainerDied","Data":"c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.586459 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv65n" event={"ID":"058f5e4b-8c67-4ac4-ba76-857541f70949","Type":"ContainerStarted","Data":"15dc8927a560f079f7dd8958031037b02267acb5f33bb588758e031948fd67c8"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.590728 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" event={"ID":"74dde675-4516-4165-badb-d7233a017fe1","Type":"ContainerStarted","Data":"7dfd91502f8986ac343e40e5e74e0135d31e7d78e014e2e1d1c40ff7c48a36cf"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.590777 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.590789 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" 
event={"ID":"74dde675-4516-4165-badb-d7233a017fe1","Type":"ContainerStarted","Data":"f6990c0605c627a1605ed253e259548380cce104db5507e3e52c5249496598fe"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.593422 4880 generic.go:334] "Generic (PLEG): container finished" podID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerID="444dfb8e1a6cbdb934467c7ade48718463639e238655a9727687b93798b62c58" exitCode=0 Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.593618 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qhf" event={"ID":"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4","Type":"ContainerDied","Data":"444dfb8e1a6cbdb934467c7ade48718463639e238655a9727687b93798b62c58"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.602587 4880 generic.go:334] "Generic (PLEG): container finished" podID="d9fd9260-cfde-4ec1-8b3c-c757712369d6" containerID="986f68b703de6eaef39b42051948c64a8455f000e22ac1e864d6516dd6aa5f1e" exitCode=0 Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.603761 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" event={"ID":"d9fd9260-cfde-4ec1-8b3c-c757712369d6","Type":"ContainerDied","Data":"986f68b703de6eaef39b42051948c64a8455f000e22ac1e864d6516dd6aa5f1e"} Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.613326 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfq7x\" (UniqueName: \"kubernetes.io/projected/07ddb3cc-a464-4645-8b0c-7475a9b75330-kube-api-access-cfq7x\") pod \"redhat-marketplace-rstnz\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") " pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.615097 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-j5tcx" podStartSLOduration=12.615074003 podStartE2EDuration="12.615074003s" 
podCreationTimestamp="2025-12-01 02:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:48.595197891 +0000 UTC m=+158.106452283" watchObservedRunningTime="2025-12-01 02:58:48.615074003 +0000 UTC m=+158.126328385" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.689713 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" podStartSLOduration=138.689696469 podStartE2EDuration="2m18.689696469s" podCreationTimestamp="2025-12-01 02:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:48.686918784 +0000 UTC m=+158.198173186" watchObservedRunningTime="2025-12-01 02:58:48.689696469 +0000 UTC m=+158.200950841" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.714959 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.772008 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.772166 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.772846 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8fq8c"] Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.773787 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.794441 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.794830 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fq8c"] Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.794931 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.843510 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.853192 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.860403 4880 patch_prober.go:28] interesting pod/console-f9d7485db-qcvrn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.860477 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qcvrn" podUID="747403d3-576b-4621-8cb3-b9122348ec98" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.881777 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-utilities\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.881813 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-catalog-content\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.881900 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7nhs\" (UniqueName: \"kubernetes.io/projected/4838ff1a-deaf-4970-bf9b-acc638e7aadc-kube-api-access-l7nhs\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.968470 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rstnz"] Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.973292 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.973358 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.983579 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-utilities\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 
crc kubenswrapper[4880]: I1201 02:58:48.983631 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-catalog-content\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.983780 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7nhs\" (UniqueName: \"kubernetes.io/projected/4838ff1a-deaf-4970-bf9b-acc638e7aadc-kube-api-access-l7nhs\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.984477 4880 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tzxfr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]log ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]etcd ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/generic-apiserver-start-informers ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/max-in-flight-filter ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 01 02:58:48 crc kubenswrapper[4880]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 01 02:58:48 crc kubenswrapper[4880]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 01 02:58:48 crc kubenswrapper[4880]: 
[+]poststarthook/project.openshift.io-projectcache ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/openshift.io-startinformers ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 01 02:58:48 crc kubenswrapper[4880]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 01 02:58:48 crc kubenswrapper[4880]: livez check failed Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.984526 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" podUID="79af8363-2911-45e0-9b07-3421b2626de0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.984770 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-catalog-content\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:48 crc kubenswrapper[4880]: I1201 02:58:48.984818 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-utilities\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.000388 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7nhs\" (UniqueName: \"kubernetes.io/projected/4838ff1a-deaf-4970-bf9b-acc638e7aadc-kube-api-access-l7nhs\") pod \"redhat-marketplace-8fq8c\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 
02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.096269 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.312262 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fq8c"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.376260 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q8d88"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.377204 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.380453 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.385720 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8d88"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.391016 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-catalog-content\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.391060 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdrp\" (UniqueName: \"kubernetes.io/projected/55a2d619-2948-4157-a5ab-a2e2a9247cc2-kube-api-access-hbdrp\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.391130 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-utilities\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.490287 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.491937 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-catalog-content\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.491978 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdrp\" (UniqueName: \"kubernetes.io/projected/55a2d619-2948-4157-a5ab-a2e2a9247cc2-kube-api-access-hbdrp\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.492025 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-utilities\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.492467 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-utilities\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " 
pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.492500 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-catalog-content\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.493682 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:49 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:49 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:49 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.493856 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.512135 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdrp\" (UniqueName: \"kubernetes.io/projected/55a2d619-2948-4157-a5ab-a2e2a9247cc2-kube-api-access-hbdrp\") pod \"redhat-operators-q8d88\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") " pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.608049 4880 generic.go:334] "Generic (PLEG): container finished" podID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerID="38010a4b04df2fcd5c8370db8f4255ae71135639b4c95b3d3f46c6b73da2e766" exitCode=0 Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.608115 4880 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rstnz" event={"ID":"07ddb3cc-a464-4645-8b0c-7475a9b75330","Type":"ContainerDied","Data":"38010a4b04df2fcd5c8370db8f4255ae71135639b4c95b3d3f46c6b73da2e766"} Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.608142 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rstnz" event={"ID":"07ddb3cc-a464-4645-8b0c-7475a9b75330","Type":"ContainerStarted","Data":"5ce6e92e65c3c18b8ce0069d17049b121bfedef66e8b5f3d370e787584d945e2"} Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.614967 4880 generic.go:334] "Generic (PLEG): container finished" podID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerID="fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5" exitCode=0 Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.629601 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fq8c" event={"ID":"4838ff1a-deaf-4970-bf9b-acc638e7aadc","Type":"ContainerDied","Data":"fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5"} Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.629632 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fq8c" event={"ID":"4838ff1a-deaf-4970-bf9b-acc638e7aadc","Type":"ContainerStarted","Data":"6a3656822845560a0010be2eda4c0eb031fb4bb973aa2ea6e92e9b990e0e3f2b"} Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.640193 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bcrdd" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.676259 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.727737 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.738897 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.740797 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.744206 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.744487 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.756339 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.807350 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55sbb"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.828124 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.847666 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.874931 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55sbb"] Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.899353 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-catalog-content\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.899405 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4ff07d-41a9-43e9-884e-afd4093d198f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.899421 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-895ns\" (UniqueName: \"kubernetes.io/projected/123479b4-c08d-4081-8f32-3c0609583ed6-kube-api-access-895ns\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.899438 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f4ff07d-41a9-43e9-884e-afd4093d198f-kubelet-dir\") pod \"revision-pruner-9-crc\" 
(UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:49 crc kubenswrapper[4880]: I1201 02:58:49.899451 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-utilities\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.000414 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-catalog-content\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.000769 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4ff07d-41a9-43e9-884e-afd4093d198f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.000787 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-895ns\" (UniqueName: \"kubernetes.io/projected/123479b4-c08d-4081-8f32-3c0609583ed6-kube-api-access-895ns\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.000804 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f4ff07d-41a9-43e9-884e-afd4093d198f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.000821 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-utilities\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.001282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-utilities\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.001485 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-catalog-content\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.002324 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f4ff07d-41a9-43e9-884e-afd4093d198f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.039483 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4ff07d-41a9-43e9-884e-afd4093d198f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.063939 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-895ns\" (UniqueName: \"kubernetes.io/projected/123479b4-c08d-4081-8f32-3c0609583ed6-kube-api-access-895ns\") pod \"redhat-operators-55sbb\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.078186 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.169294 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.226280 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8d88"] Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.243746 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:50 crc kubenswrapper[4880]: W1201 02:58:50.266251 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a2d619_2948_4157_a5ab_a2e2a9247cc2.slice/crio-a0fac64fef673b6a922de2a61dbf0ed2ff66da4ed5791b90f63d191e90490bdc WatchSource:0}: Error finding container a0fac64fef673b6a922de2a61dbf0ed2ff66da4ed5791b90f63d191e90490bdc: Status 404 returned error can't find the container with id a0fac64fef673b6a922de2a61dbf0ed2ff66da4ed5791b90f63d191e90490bdc Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.307408 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd9260-cfde-4ec1-8b3c-c757712369d6-config-volume\") pod \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.307511 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9fd9260-cfde-4ec1-8b3c-c757712369d6-secret-volume\") pod \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.307575 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkjf\" (UniqueName: \"kubernetes.io/projected/d9fd9260-cfde-4ec1-8b3c-c757712369d6-kube-api-access-pfkjf\") pod \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\" (UID: \"d9fd9260-cfde-4ec1-8b3c-c757712369d6\") " Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.308184 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fd9260-cfde-4ec1-8b3c-c757712369d6-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"d9fd9260-cfde-4ec1-8b3c-c757712369d6" (UID: "d9fd9260-cfde-4ec1-8b3c-c757712369d6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.331377 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fd9260-cfde-4ec1-8b3c-c757712369d6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9fd9260-cfde-4ec1-8b3c-c757712369d6" (UID: "d9fd9260-cfde-4ec1-8b3c-c757712369d6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.332243 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fd9260-cfde-4ec1-8b3c-c757712369d6-kube-api-access-pfkjf" (OuterVolumeSpecName: "kube-api-access-pfkjf") pod "d9fd9260-cfde-4ec1-8b3c-c757712369d6" (UID: "d9fd9260-cfde-4ec1-8b3c-c757712369d6"). InnerVolumeSpecName "kube-api-access-pfkjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.411060 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9fd9260-cfde-4ec1-8b3c-c757712369d6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.411649 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkjf\" (UniqueName: \"kubernetes.io/projected/d9fd9260-cfde-4ec1-8b3c-c757712369d6-kube-api-access-pfkjf\") on node \"crc\" DevicePath \"\"" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.411679 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9fd9260-cfde-4ec1-8b3c-c757712369d6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.503787 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:50 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:50 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:50 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.503832 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.625082 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55sbb"] Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.641123 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-q8d88" event={"ID":"55a2d619-2948-4157-a5ab-a2e2a9247cc2","Type":"ContainerStarted","Data":"a0fac64fef673b6a922de2a61dbf0ed2ff66da4ed5791b90f63d191e90490bdc"} Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.667162 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.667211 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb" event={"ID":"d9fd9260-cfde-4ec1-8b3c-c757712369d6","Type":"ContainerDied","Data":"62a8f60651ca71973eeff6f36ae9ecc111576c69c2ea917bbce367abfeab2bd9"} Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.667234 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a8f60651ca71973eeff6f36ae9ecc111576c69c2ea917bbce367abfeab2bd9" Dec 01 02:58:50 crc kubenswrapper[4880]: I1201 02:58:50.698356 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 02:58:50 crc kubenswrapper[4880]: W1201 02:58:50.722541 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f4ff07d_41a9_43e9_884e_afd4093d198f.slice/crio-cde50bf0962ca0b03fde1b8d38a6086a7540afb07926a4e2427bdd2fcf3ef04e WatchSource:0}: Error finding container cde50bf0962ca0b03fde1b8d38a6086a7540afb07926a4e2427bdd2fcf3ef04e: Status 404 returned error can't find the container with id cde50bf0962ca0b03fde1b8d38a6086a7540afb07926a4e2427bdd2fcf3ef04e Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.492548 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:51 
crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:51 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:51 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.493260 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.674721 4880 generic.go:334] "Generic (PLEG): container finished" podID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerID="7e90ed69aa8f07e79a5e74f5b1c354eb149e385f2f3ea57ae866b0fabf6dfdf3" exitCode=0 Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.675627 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8d88" event={"ID":"55a2d619-2948-4157-a5ab-a2e2a9247cc2","Type":"ContainerDied","Data":"7e90ed69aa8f07e79a5e74f5b1c354eb149e385f2f3ea57ae866b0fabf6dfdf3"} Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.679437 4880 generic.go:334] "Generic (PLEG): container finished" podID="123479b4-c08d-4081-8f32-3c0609583ed6" containerID="a8b4699ef27590421cdc4883df4d3c538cb9d66c0727f39f62a46962be1060d9" exitCode=0 Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.679489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55sbb" event={"ID":"123479b4-c08d-4081-8f32-3c0609583ed6","Type":"ContainerDied","Data":"a8b4699ef27590421cdc4883df4d3c538cb9d66c0727f39f62a46962be1060d9"} Dec 01 02:58:51 crc kubenswrapper[4880]: I1201 02:58:51.679547 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55sbb" event={"ID":"123479b4-c08d-4081-8f32-3c0609583ed6","Type":"ContainerStarted","Data":"b27880daac6cde12f20d2dc3caf88d3439aa893a035d92130c89401834044e83"} Dec 01 02:58:51 crc 
kubenswrapper[4880]: I1201 02:58:51.684537 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f4ff07d-41a9-43e9-884e-afd4093d198f","Type":"ContainerStarted","Data":"cde50bf0962ca0b03fde1b8d38a6086a7540afb07926a4e2427bdd2fcf3ef04e"} Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.355108 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.360337 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60f88b82-c5e9-4f47-91c1-4e78498b481e-metrics-certs\") pod \"network-metrics-daemon-chtvv\" (UID: \"60f88b82-c5e9-4f47-91c1-4e78498b481e\") " pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.492652 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:52 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:52 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:52 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.492719 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.606241 4880 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-chtvv" Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.705597 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f4ff07d-41a9-43e9-884e-afd4093d198f","Type":"ContainerStarted","Data":"c5fa927d72258aeb9204d840c1045a30f90b588cdad70a5f18d531f294afd640"} Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.733499 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.733482768 podStartE2EDuration="3.733482768s" podCreationTimestamp="2025-12-01 02:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:52.719280458 +0000 UTC m=+162.230534840" watchObservedRunningTime="2025-12-01 02:58:52.733482768 +0000 UTC m=+162.244737140" Dec 01 02:58:52 crc kubenswrapper[4880]: I1201 02:58:52.939915 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-chtvv"] Dec 01 02:58:52 crc kubenswrapper[4880]: W1201 02:58:52.963326 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60f88b82_c5e9_4f47_91c1_4e78498b481e.slice/crio-f0bfaca4e7b47c1001945b22eeda9e422db5a4d2adf0eb3ad88e4252bfd652d3 WatchSource:0}: Error finding container f0bfaca4e7b47c1001945b22eeda9e422db5a4d2adf0eb3ad88e4252bfd652d3: Status 404 returned error can't find the container with id f0bfaca4e7b47c1001945b22eeda9e422db5a4d2adf0eb3ad88e4252bfd652d3 Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.493522 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:53 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:53 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:53 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.493808 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.712437 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-chtvv" event={"ID":"60f88b82-c5e9-4f47-91c1-4e78498b481e","Type":"ContainerStarted","Data":"f0bfaca4e7b47c1001945b22eeda9e422db5a4d2adf0eb3ad88e4252bfd652d3"} Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.714386 4880 generic.go:334] "Generic (PLEG): container finished" podID="3f4ff07d-41a9-43e9-884e-afd4093d198f" containerID="c5fa927d72258aeb9204d840c1045a30f90b588cdad70a5f18d531f294afd640" exitCode=0 Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.714425 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f4ff07d-41a9-43e9-884e-afd4093d198f","Type":"ContainerDied","Data":"c5fa927d72258aeb9204d840c1045a30f90b588cdad70a5f18d531f294afd640"} Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.990742 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:53 crc kubenswrapper[4880]: I1201 02:58:53.995281 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tzxfr" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.257705 4880 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 02:58:54 crc kubenswrapper[4880]: E1201 02:58:54.257915 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fd9260-cfde-4ec1-8b3c-c757712369d6" containerName="collect-profiles" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.257927 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fd9260-cfde-4ec1-8b3c-c757712369d6" containerName="collect-profiles" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.258034 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fd9260-cfde-4ec1-8b3c-c757712369d6" containerName="collect-profiles" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.258379 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.260556 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.260805 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.271692 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.399779 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6676084e-213f-4c59-ab0e-9390022fe860-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.400062 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6676084e-213f-4c59-ab0e-9390022fe860-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.493282 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:54 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:54 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:54 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.493333 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.501176 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6676084e-213f-4c59-ab0e-9390022fe860-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.501203 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6676084e-213f-4c59-ab0e-9390022fe860-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.501312 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6676084e-213f-4c59-ab0e-9390022fe860-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.526905 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6676084e-213f-4c59-ab0e-9390022fe860-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.615721 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.661117 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xj7f7" Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.748151 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-chtvv" event={"ID":"60f88b82-c5e9-4f47-91c1-4e78498b481e","Type":"ContainerStarted","Data":"3c64a47e2a2380ea8562bb2a07ca7df470db0a9ee3165ce67a921256a8ea41c0"} Dec 01 02:58:54 crc kubenswrapper[4880]: I1201 02:58:54.748182 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-chtvv" event={"ID":"60f88b82-c5e9-4f47-91c1-4e78498b481e","Type":"ContainerStarted","Data":"f717ceec0b0c3eef4f423b3fe1e55df5d3df2c6e33f182a74aee061ff9f741c2"} Dec 01 02:58:55 crc kubenswrapper[4880]: I1201 02:58:55.300022 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-chtvv" podStartSLOduration=146.299980568 podStartE2EDuration="2m26.299980568s" podCreationTimestamp="2025-12-01 02:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:58:54.769680235 +0000 UTC m=+164.280934617" watchObservedRunningTime="2025-12-01 02:58:55.299980568 +0000 UTC m=+164.811234940" Dec 01 02:58:55 crc kubenswrapper[4880]: I1201 02:58:55.314171 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 02:58:55 crc kubenswrapper[4880]: I1201 02:58:55.492346 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:55 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:55 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:55 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:55 crc kubenswrapper[4880]: I1201 02:58:55.492429 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:56 crc kubenswrapper[4880]: I1201 02:58:56.492057 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:56 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:56 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:56 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:56 crc kubenswrapper[4880]: I1201 02:58:56.492419 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Dec 01 02:58:57 crc kubenswrapper[4880]: I1201 02:58:57.491402 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:57 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:57 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:57 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:57 crc kubenswrapper[4880]: I1201 02:58:57.491448 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:58 crc kubenswrapper[4880]: I1201 02:58:58.494170 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:58 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:58 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:58 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:58 crc kubenswrapper[4880]: I1201 02:58:58.494223 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:58:58 crc kubenswrapper[4880]: I1201 02:58:58.530017 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qb4wv" Dec 01 02:58:58 crc kubenswrapper[4880]: I1201 02:58:58.844140 4880 patch_prober.go:28] interesting 
pod/console-f9d7485db-qcvrn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 01 02:58:58 crc kubenswrapper[4880]: I1201 02:58:58.844195 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qcvrn" podUID="747403d3-576b-4621-8cb3-b9122348ec98" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 01 02:58:59 crc kubenswrapper[4880]: I1201 02:58:59.492300 4880 patch_prober.go:28] interesting pod/router-default-5444994796-62v7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 02:58:59 crc kubenswrapper[4880]: [-]has-synced failed: reason withheld Dec 01 02:58:59 crc kubenswrapper[4880]: [+]process-running ok Dec 01 02:58:59 crc kubenswrapper[4880]: healthz check failed Dec 01 02:58:59 crc kubenswrapper[4880]: I1201 02:58:59.493943 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62v7v" podUID="5ece886a-bdc2-4c08-b6a8-4fd522409dee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 02:59:00 crc kubenswrapper[4880]: I1201 02:59:00.492927 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:59:00 crc kubenswrapper[4880]: I1201 02:59:00.495708 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-62v7v" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.141373 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.334097 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f4ff07d-41a9-43e9-884e-afd4093d198f-kubelet-dir\") pod \"3f4ff07d-41a9-43e9-884e-afd4093d198f\" (UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.334258 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f4ff07d-41a9-43e9-884e-afd4093d198f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f4ff07d-41a9-43e9-884e-afd4093d198f" (UID: "3f4ff07d-41a9-43e9-884e-afd4093d198f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.334265 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4ff07d-41a9-43e9-884e-afd4093d198f-kube-api-access\") pod \"3f4ff07d-41a9-43e9-884e-afd4093d198f\" (UID: \"3f4ff07d-41a9-43e9-884e-afd4093d198f\") " Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.334669 4880 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f4ff07d-41a9-43e9-884e-afd4093d198f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.339163 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4ff07d-41a9-43e9-884e-afd4093d198f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f4ff07d-41a9-43e9-884e-afd4093d198f" (UID: "3f4ff07d-41a9-43e9-884e-afd4093d198f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.436028 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4ff07d-41a9-43e9-884e-afd4093d198f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.840902 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6676084e-213f-4c59-ab0e-9390022fe860","Type":"ContainerStarted","Data":"34941629cdb5d191124f2acec0dd504442dbc794cf38b236694ad74baa8d7dc6"} Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.842769 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f4ff07d-41a9-43e9-884e-afd4093d198f","Type":"ContainerDied","Data":"cde50bf0962ca0b03fde1b8d38a6086a7540afb07926a4e2427bdd2fcf3ef04e"} Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.842792 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde50bf0962ca0b03fde1b8d38a6086a7540afb07926a4e2427bdd2fcf3ef04e" Dec 01 02:59:05 crc kubenswrapper[4880]: I1201 02:59:05.842837 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 02:59:08 crc kubenswrapper[4880]: I1201 02:59:08.163285 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 02:59:08 crc kubenswrapper[4880]: I1201 02:59:08.849070 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:59:08 crc kubenswrapper[4880]: I1201 02:59:08.853977 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 02:59:17 crc kubenswrapper[4880]: I1201 02:59:17.369002 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 02:59:17 crc kubenswrapper[4880]: I1201 02:59:17.369466 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 02:59:17 crc kubenswrapper[4880]: I1201 02:59:17.511933 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 02:59:18 crc kubenswrapper[4880]: I1201 02:59:18.942800 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4npfs" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.813632 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.814241 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7nhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8fq8c_openshift-marketplace(4838ff1a-deaf-4970-bf9b-acc638e7aadc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.816375 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8fq8c" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.870315 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.870502 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snzb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dv65n_openshift-marketplace(058f5e4b-8c67-4ac4-ba76-857541f70949): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.872731 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dv65n" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" Dec 01 02:59:20 crc 
kubenswrapper[4880]: E1201 02:59:20.886592 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.886767 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbjwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-j8wd4_openshift-marketplace(d076e21e-9946-4ee4-9953-7c0a3830c0fc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.888906 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j8wd4" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.932460 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.932612 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s46r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-65qhf_openshift-marketplace(b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.933798 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-65qhf" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" Dec 01 02:59:20 crc 
kubenswrapper[4880]: E1201 02:59:20.967654 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j8wd4" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.970405 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dv65n" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" Dec 01 02:59:20 crc kubenswrapper[4880]: E1201 02:59:20.973756 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8fq8c" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.954342 4880 generic.go:334] "Generic (PLEG): container finished" podID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerID="b58ef27910f83c3030c5447e18713be060ae3e5b5169fbd36df7a71d14384b34" exitCode=0 Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.954418 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rstnz" event={"ID":"07ddb3cc-a464-4645-8b0c-7475a9b75330","Type":"ContainerDied","Data":"b58ef27910f83c3030c5447e18713be060ae3e5b5169fbd36df7a71d14384b34"} Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.959622 4880 generic.go:334] "Generic (PLEG): container finished" podID="123479b4-c08d-4081-8f32-3c0609583ed6" containerID="7b80cc2ab4ce5509f2ae0e9ca4bc945cb2bdcf1b4a13b6c0a02d3c1eebef7800" exitCode=0 Dec 01 
02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.959733 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55sbb" event={"ID":"123479b4-c08d-4081-8f32-3c0609583ed6","Type":"ContainerDied","Data":"7b80cc2ab4ce5509f2ae0e9ca4bc945cb2bdcf1b4a13b6c0a02d3c1eebef7800"} Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.967976 4880 generic.go:334] "Generic (PLEG): container finished" podID="164c787a-f422-44ea-9cac-99166ce43f0b" containerID="2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8" exitCode=0 Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.968001 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cxsv" event={"ID":"164c787a-f422-44ea-9cac-99166ce43f0b","Type":"ContainerDied","Data":"2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8"} Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.971073 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6676084e-213f-4c59-ab0e-9390022fe860","Type":"ContainerStarted","Data":"af8fbc61801a825d3eb646baf5203637477afb2b2d0363c9f5f326912bc3434b"} Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.977486 4880 generic.go:334] "Generic (PLEG): container finished" podID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerID="75eed8ffa6cce1862bccda3657f7d740dba9af8111522a1b03a6f0fc6c13538d" exitCode=0 Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.977674 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8d88" event={"ID":"55a2d619-2948-4157-a5ab-a2e2a9247cc2","Type":"ContainerDied","Data":"75eed8ffa6cce1862bccda3657f7d740dba9af8111522a1b03a6f0fc6c13538d"} Dec 01 02:59:21 crc kubenswrapper[4880]: E1201 02:59:21.980743 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-65qhf" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" Dec 01 02:59:21 crc kubenswrapper[4880]: I1201 02:59:21.994708 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=27.994690495 podStartE2EDuration="27.994690495s" podCreationTimestamp="2025-12-01 02:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:59:21.989836932 +0000 UTC m=+191.501091294" watchObservedRunningTime="2025-12-01 02:59:21.994690495 +0000 UTC m=+191.505944867" Dec 01 02:59:22 crc kubenswrapper[4880]: I1201 02:59:22.984500 4880 generic.go:334] "Generic (PLEG): container finished" podID="6676084e-213f-4c59-ab0e-9390022fe860" containerID="af8fbc61801a825d3eb646baf5203637477afb2b2d0363c9f5f326912bc3434b" exitCode=0 Dec 01 02:59:22 crc kubenswrapper[4880]: I1201 02:59:22.984577 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6676084e-213f-4c59-ab0e-9390022fe860","Type":"ContainerDied","Data":"af8fbc61801a825d3eb646baf5203637477afb2b2d0363c9f5f326912bc3434b"} Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.314541 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.407487 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6676084e-213f-4c59-ab0e-9390022fe860-kubelet-dir\") pod \"6676084e-213f-4c59-ab0e-9390022fe860\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.407586 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6676084e-213f-4c59-ab0e-9390022fe860-kube-api-access\") pod \"6676084e-213f-4c59-ab0e-9390022fe860\" (UID: \"6676084e-213f-4c59-ab0e-9390022fe860\") " Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.407597 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6676084e-213f-4c59-ab0e-9390022fe860-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6676084e-213f-4c59-ab0e-9390022fe860" (UID: "6676084e-213f-4c59-ab0e-9390022fe860"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.407833 4880 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6676084e-213f-4c59-ab0e-9390022fe860-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.413407 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6676084e-213f-4c59-ab0e-9390022fe860-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6676084e-213f-4c59-ab0e-9390022fe860" (UID: "6676084e-213f-4c59-ab0e-9390022fe860"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:24 crc kubenswrapper[4880]: I1201 02:59:24.508461 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6676084e-213f-4c59-ab0e-9390022fe860-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.003257 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cxsv" event={"ID":"164c787a-f422-44ea-9cac-99166ce43f0b","Type":"ContainerStarted","Data":"cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf"} Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.005901 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6676084e-213f-4c59-ab0e-9390022fe860","Type":"ContainerDied","Data":"34941629cdb5d191124f2acec0dd504442dbc794cf38b236694ad74baa8d7dc6"} Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.005939 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34941629cdb5d191124f2acec0dd504442dbc794cf38b236694ad74baa8d7dc6" Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.006004 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.014949 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8d88" event={"ID":"55a2d619-2948-4157-a5ab-a2e2a9247cc2","Type":"ContainerStarted","Data":"d54b7e9c5d0e6ea15b73349b8bc2575bcad7414a08e309a6ab318d413e3309de"} Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.016847 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rstnz" event={"ID":"07ddb3cc-a464-4645-8b0c-7475a9b75330","Type":"ContainerStarted","Data":"c1dd4528ddd5ea4cc92e76ea58973db3113ee94b77886047a8956eb6dbd6479c"} Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.019755 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55sbb" event={"ID":"123479b4-c08d-4081-8f32-3c0609583ed6","Type":"ContainerStarted","Data":"d803e7fb9b8e22a8535b0b5ee8e3f8a5f4c3c142bde337d7ebc6ded3c8adb8d8"} Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.028339 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4cxsv" podStartSLOduration=3.345310717 podStartE2EDuration="39.02832746s" podCreationTimestamp="2025-12-01 02:58:46 +0000 UTC" firstStartedPulling="2025-12-01 02:58:48.573952657 +0000 UTC m=+158.085207029" lastFinishedPulling="2025-12-01 02:59:24.2569694 +0000 UTC m=+193.768223772" observedRunningTime="2025-12-01 02:59:25.028010633 +0000 UTC m=+194.539265005" watchObservedRunningTime="2025-12-01 02:59:25.02832746 +0000 UTC m=+194.539581832" Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.044233 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55sbb" podStartSLOduration=3.593082393 podStartE2EDuration="36.04421566s" podCreationTimestamp="2025-12-01 02:58:49 +0000 UTC" 
firstStartedPulling="2025-12-01 02:58:51.685484554 +0000 UTC m=+161.196738926" lastFinishedPulling="2025-12-01 02:59:24.136617821 +0000 UTC m=+193.647872193" observedRunningTime="2025-12-01 02:59:25.043580065 +0000 UTC m=+194.554834437" watchObservedRunningTime="2025-12-01 02:59:25.04421566 +0000 UTC m=+194.555470032" Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.068492 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q8d88" podStartSLOduration=4.423308572 podStartE2EDuration="36.068475234s" podCreationTimestamp="2025-12-01 02:58:49 +0000 UTC" firstStartedPulling="2025-12-01 02:58:52.709200103 +0000 UTC m=+162.220454475" lastFinishedPulling="2025-12-01 02:59:24.354366765 +0000 UTC m=+193.865621137" observedRunningTime="2025-12-01 02:59:25.065882894 +0000 UTC m=+194.577137266" watchObservedRunningTime="2025-12-01 02:59:25.068475234 +0000 UTC m=+194.579729606" Dec 01 02:59:25 crc kubenswrapper[4880]: I1201 02:59:25.096072 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rstnz" podStartSLOduration=2.5523104500000002 podStartE2EDuration="37.096055516s" podCreationTimestamp="2025-12-01 02:58:48 +0000 UTC" firstStartedPulling="2025-12-01 02:58:49.609396629 +0000 UTC m=+159.120651001" lastFinishedPulling="2025-12-01 02:59:24.153141695 +0000 UTC m=+193.664396067" observedRunningTime="2025-12-01 02:59:25.091989791 +0000 UTC m=+194.603244163" watchObservedRunningTime="2025-12-01 02:59:25.096055516 +0000 UTC m=+194.607309888" Dec 01 02:59:27 crc kubenswrapper[4880]: I1201 02:59:27.136217 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:59:27 crc kubenswrapper[4880]: I1201 02:59:27.136475 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:59:27 crc 
kubenswrapper[4880]: I1201 02:59:27.213094 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:59:27 crc kubenswrapper[4880]: I1201 02:59:27.473039 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgdhn"] Dec 01 02:59:28 crc kubenswrapper[4880]: I1201 02:59:28.715956 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:59:28 crc kubenswrapper[4880]: I1201 02:59:28.716275 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:59:28 crc kubenswrapper[4880]: I1201 02:59:28.749175 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.075888 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.629967 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 02:59:29 crc kubenswrapper[4880]: E1201 02:59:29.630408 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4ff07d-41a9-43e9-884e-afd4093d198f" containerName="pruner" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.630429 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4ff07d-41a9-43e9-884e-afd4093d198f" containerName="pruner" Dec 01 02:59:29 crc kubenswrapper[4880]: E1201 02:59:29.630447 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6676084e-213f-4c59-ab0e-9390022fe860" containerName="pruner" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.630455 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6676084e-213f-4c59-ab0e-9390022fe860" containerName="pruner" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.630618 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6676084e-213f-4c59-ab0e-9390022fe860" containerName="pruner" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.630642 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4ff07d-41a9-43e9-884e-afd4093d198f" containerName="pruner" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.631169 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.643051 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.643136 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.653588 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.728361 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.728419 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.766527 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc 
kubenswrapper[4880]: I1201 02:59:29.766595 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.867598 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.867661 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.867789 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.889816 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:29 crc kubenswrapper[4880]: I1201 02:59:29.950477 4880 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:30 crc kubenswrapper[4880]: I1201 02:59:30.170436 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:59:30 crc kubenswrapper[4880]: I1201 02:59:30.170861 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:59:30 crc kubenswrapper[4880]: I1201 02:59:30.417802 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 02:59:30 crc kubenswrapper[4880]: I1201 02:59:30.764358 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q8d88" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="registry-server" probeResult="failure" output=< Dec 01 02:59:30 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 02:59:30 crc kubenswrapper[4880]: > Dec 01 02:59:31 crc kubenswrapper[4880]: I1201 02:59:31.049773 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f","Type":"ContainerStarted","Data":"fa61c291da536d9f578bc1e52e6087f3ecb216275474d27ff2693006c793de08"} Dec 01 02:59:31 crc kubenswrapper[4880]: I1201 02:59:31.224372 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55sbb" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="registry-server" probeResult="failure" output=< Dec 01 02:59:31 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 02:59:31 crc kubenswrapper[4880]: > Dec 01 02:59:32 crc kubenswrapper[4880]: I1201 02:59:32.056281 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f","Type":"ContainerStarted","Data":"01661a2f6dd7654ebc290dca81060d7b36a33bdad3b0662efd7ddfa48f109d8e"} Dec 01 02:59:32 crc kubenswrapper[4880]: I1201 02:59:32.077842 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.077821348 podStartE2EDuration="3.077821348s" podCreationTimestamp="2025-12-01 02:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:59:32.074467811 +0000 UTC m=+201.585722203" watchObservedRunningTime="2025-12-01 02:59:32.077821348 +0000 UTC m=+201.589075720" Dec 01 02:59:33 crc kubenswrapper[4880]: I1201 02:59:33.063647 4880 generic.go:334] "Generic (PLEG): container finished" podID="6ce9bf6b-5c01-409b-8267-98e00cd9fb8f" containerID="01661a2f6dd7654ebc290dca81060d7b36a33bdad3b0662efd7ddfa48f109d8e" exitCode=0 Dec 01 02:59:33 crc kubenswrapper[4880]: I1201 02:59:33.063711 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f","Type":"ContainerDied","Data":"01661a2f6dd7654ebc290dca81060d7b36a33bdad3b0662efd7ddfa48f109d8e"} Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.280797 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.433967 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kubelet-dir\") pod \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.434086 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ce9bf6b-5c01-409b-8267-98e00cd9fb8f" (UID: "6ce9bf6b-5c01-409b-8267-98e00cd9fb8f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.434760 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kube-api-access\") pod \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\" (UID: \"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f\") " Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.435895 4880 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.440351 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ce9bf6b-5c01-409b-8267-98e00cd9fb8f" (UID: "6ce9bf6b-5c01-409b-8267-98e00cd9fb8f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:34 crc kubenswrapper[4880]: I1201 02:59:34.537224 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ce9bf6b-5c01-409b-8267-98e00cd9fb8f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:35 crc kubenswrapper[4880]: I1201 02:59:35.077109 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6ce9bf6b-5c01-409b-8267-98e00cd9fb8f","Type":"ContainerDied","Data":"fa61c291da536d9f578bc1e52e6087f3ecb216275474d27ff2693006c793de08"} Dec 01 02:59:35 crc kubenswrapper[4880]: I1201 02:59:35.077942 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa61c291da536d9f578bc1e52e6087f3ecb216275474d27ff2693006c793de08" Dec 01 02:59:35 crc kubenswrapper[4880]: I1201 02:59:35.077198 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 02:59:36 crc kubenswrapper[4880]: I1201 02:59:36.083578 4880 generic.go:334] "Generic (PLEG): container finished" podID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerID="e8057399eab846d599ad55383b8455aaece2ecd0ebe135239dd7aa022924dd22" exitCode=0 Dec 01 02:59:36 crc kubenswrapper[4880]: I1201 02:59:36.083709 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8wd4" event={"ID":"d076e21e-9946-4ee4-9953-7c0a3830c0fc","Type":"ContainerDied","Data":"e8057399eab846d599ad55383b8455aaece2ecd0ebe135239dd7aa022924dd22"} Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.089441 4880 generic.go:334] "Generic (PLEG): container finished" podID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerID="04add8a6af2bfc9f474692432e226ac99e1b45b703e0ebe443c69fc6a99b8bd6" exitCode=0 Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.089497 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-65qhf" event={"ID":"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4","Type":"ContainerDied","Data":"04add8a6af2bfc9f474692432e226ac99e1b45b703e0ebe443c69fc6a99b8bd6"} Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.092086 4880 generic.go:334] "Generic (PLEG): container finished" podID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerID="00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572" exitCode=0 Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.092182 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fq8c" event={"ID":"4838ff1a-deaf-4970-bf9b-acc638e7aadc","Type":"ContainerDied","Data":"00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572"} Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.095340 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8wd4" event={"ID":"d076e21e-9946-4ee4-9953-7c0a3830c0fc","Type":"ContainerStarted","Data":"34e67f1da4fcd2941fae6824f59ba6604f5ddb1fd213e09a43a959563be8a2eb"} Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.153627 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8wd4" podStartSLOduration=3.215772735 podStartE2EDuration="51.153609426s" podCreationTimestamp="2025-12-01 02:58:46 +0000 UTC" firstStartedPulling="2025-12-01 02:58:48.574809397 +0000 UTC m=+158.086063779" lastFinishedPulling="2025-12-01 02:59:36.512646088 +0000 UTC m=+206.023900470" observedRunningTime="2025-12-01 02:59:37.147903594 +0000 UTC m=+206.659157966" watchObservedRunningTime="2025-12-01 02:59:37.153609426 +0000 UTC m=+206.664863798" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.176803 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 
02:59:37.420969 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 02:59:37 crc kubenswrapper[4880]: E1201 02:59:37.421163 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce9bf6b-5c01-409b-8267-98e00cd9fb8f" containerName="pruner" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.421174 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce9bf6b-5c01-409b-8267-98e00cd9fb8f" containerName="pruner" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.421284 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce9bf6b-5c01-409b-8267-98e00cd9fb8f" containerName="pruner" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.421622 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.423105 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.424058 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.428385 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.576685 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ae59261-9c95-4ea2-bab8-29e4dff81623-kube-api-access\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.576738 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.576774 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-var-lock\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.677989 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.678343 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-var-lock\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.678121 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.678397 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ae59261-9c95-4ea2-bab8-29e4dff81623-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.678517 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-var-lock\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.696291 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ae59261-9c95-4ea2-bab8-29e4dff81623-kube-api-access\") pod \"installer-9-crc\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:37 crc kubenswrapper[4880]: I1201 02:59:37.749022 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.101448 4880 generic.go:334] "Generic (PLEG): container finished" podID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerID="1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416" exitCode=0 Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.101544 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv65n" event={"ID":"058f5e4b-8c67-4ac4-ba76-857541f70949","Type":"ContainerDied","Data":"1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416"} Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.104312 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qhf" event={"ID":"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4","Type":"ContainerStarted","Data":"fb5ca190f3da9b5a89bcf7b660cc6cf298ef2e0facc4216f82f45c523b7d8de3"} Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.107153 4880 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fq8c" event={"ID":"4838ff1a-deaf-4970-bf9b-acc638e7aadc","Type":"ContainerStarted","Data":"8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a"} Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.145212 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8fq8c" podStartSLOduration=2.283984132 podStartE2EDuration="50.145190823s" podCreationTimestamp="2025-12-01 02:58:48 +0000 UTC" firstStartedPulling="2025-12-01 02:58:49.637106134 +0000 UTC m=+159.148360506" lastFinishedPulling="2025-12-01 02:59:37.498312825 +0000 UTC m=+207.009567197" observedRunningTime="2025-12-01 02:59:38.141010289 +0000 UTC m=+207.652264661" watchObservedRunningTime="2025-12-01 02:59:38.145190823 +0000 UTC m=+207.656445195" Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.158536 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65qhf" podStartSLOduration=3.014640127 podStartE2EDuration="52.158517779s" podCreationTimestamp="2025-12-01 02:58:46 +0000 UTC" firstStartedPulling="2025-12-01 02:58:48.596821589 +0000 UTC m=+158.108075971" lastFinishedPulling="2025-12-01 02:59:37.740699251 +0000 UTC m=+207.251953623" observedRunningTime="2025-12-01 02:59:38.155770792 +0000 UTC m=+207.667025174" watchObservedRunningTime="2025-12-01 02:59:38.158517779 +0000 UTC m=+207.669772151" Dec 01 02:59:38 crc kubenswrapper[4880]: I1201 02:59:38.190103 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 02:59:38 crc kubenswrapper[4880]: W1201 02:59:38.195711 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ae59261_9c95_4ea2_bab8_29e4dff81623.slice/crio-72e315fef94357eb4cacf7b9e7b906bf6087a70be1f869939ee0637e1ea0d67a WatchSource:0}: Error finding container 
72e315fef94357eb4cacf7b9e7b906bf6087a70be1f869939ee0637e1ea0d67a: Status 404 returned error can't find the container with id 72e315fef94357eb4cacf7b9e7b906bf6087a70be1f869939ee0637e1ea0d67a Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.096541 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.096956 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.113591 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv65n" event={"ID":"058f5e4b-8c67-4ac4-ba76-857541f70949","Type":"ContainerStarted","Data":"1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c"} Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.114700 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4ae59261-9c95-4ea2-bab8-29e4dff81623","Type":"ContainerStarted","Data":"79bcc5672cef1c48ea0b8ceb58c5f9ce223a072cb1e3d3fb0f495bf67f5b3011"} Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.114747 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4ae59261-9c95-4ea2-bab8-29e4dff81623","Type":"ContainerStarted","Data":"72e315fef94357eb4cacf7b9e7b906bf6087a70be1f869939ee0637e1ea0d67a"} Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.142190 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.147201 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.147192454 podStartE2EDuration="2.147192454s" podCreationTimestamp="2025-12-01 
02:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 02:59:39.146243844 +0000 UTC m=+208.657498216" watchObservedRunningTime="2025-12-01 02:59:39.147192454 +0000 UTC m=+208.658446826" Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.148174 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dv65n" podStartSLOduration=2.997673105 podStartE2EDuration="53.148169505s" podCreationTimestamp="2025-12-01 02:58:46 +0000 UTC" firstStartedPulling="2025-12-01 02:58:48.588819343 +0000 UTC m=+158.100073725" lastFinishedPulling="2025-12-01 02:59:38.739315753 +0000 UTC m=+208.250570125" observedRunningTime="2025-12-01 02:59:39.133684232 +0000 UTC m=+208.644938604" watchObservedRunningTime="2025-12-01 02:59:39.148169505 +0000 UTC m=+208.659423877" Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.525961 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4cxsv"] Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.526580 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4cxsv" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="registry-server" containerID="cri-o://cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf" gracePeriod=2 Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.784857 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:59:39 crc kubenswrapper[4880]: I1201 02:59:39.898979 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.010207 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.104322 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6x27\" (UniqueName: \"kubernetes.io/projected/164c787a-f422-44ea-9cac-99166ce43f0b-kube-api-access-n6x27\") pod \"164c787a-f422-44ea-9cac-99166ce43f0b\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.104390 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-utilities\") pod \"164c787a-f422-44ea-9cac-99166ce43f0b\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.104444 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-catalog-content\") pod \"164c787a-f422-44ea-9cac-99166ce43f0b\" (UID: \"164c787a-f422-44ea-9cac-99166ce43f0b\") " Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.105629 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-utilities" (OuterVolumeSpecName: "utilities") pod "164c787a-f422-44ea-9cac-99166ce43f0b" (UID: "164c787a-f422-44ea-9cac-99166ce43f0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.115640 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164c787a-f422-44ea-9cac-99166ce43f0b-kube-api-access-n6x27" (OuterVolumeSpecName: "kube-api-access-n6x27") pod "164c787a-f422-44ea-9cac-99166ce43f0b" (UID: "164c787a-f422-44ea-9cac-99166ce43f0b"). InnerVolumeSpecName "kube-api-access-n6x27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.119561 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.119601 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6x27\" (UniqueName: \"kubernetes.io/projected/164c787a-f422-44ea-9cac-99166ce43f0b-kube-api-access-n6x27\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.125284 4880 generic.go:334] "Generic (PLEG): container finished" podID="164c787a-f422-44ea-9cac-99166ce43f0b" containerID="cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf" exitCode=0 Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.125366 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4cxsv" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.125396 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cxsv" event={"ID":"164c787a-f422-44ea-9cac-99166ce43f0b","Type":"ContainerDied","Data":"cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf"} Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.125648 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4cxsv" event={"ID":"164c787a-f422-44ea-9cac-99166ce43f0b","Type":"ContainerDied","Data":"603e8ce4af9229d9afad199ea9a0bbb0468979d16f9338af539852a1b1a4f8cb"} Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.125724 4880 scope.go:117] "RemoveContainer" containerID="cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.150475 4880 scope.go:117] "RemoveContainer" 
containerID="2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.170243 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "164c787a-f422-44ea-9cac-99166ce43f0b" (UID: "164c787a-f422-44ea-9cac-99166ce43f0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.186204 4880 scope.go:117] "RemoveContainer" containerID="56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.214563 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.220509 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164c787a-f422-44ea-9cac-99166ce43f0b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.221594 4880 scope.go:117] "RemoveContainer" containerID="cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf" Dec 01 02:59:40 crc kubenswrapper[4880]: E1201 02:59:40.222351 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf\": container with ID starting with cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf not found: ID does not exist" containerID="cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.222386 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf"} err="failed to get container status \"cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf\": rpc error: code = NotFound desc = could not find container \"cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf\": container with ID starting with cb46ed43828d9461b7045774fd7b141221e3c3216aee9cda245202072cd60faf not found: ID does not exist" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.222425 4880 scope.go:117] "RemoveContainer" containerID="2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8" Dec 01 02:59:40 crc kubenswrapper[4880]: E1201 02:59:40.230596 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8\": container with ID starting with 2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8 not found: ID does not exist" containerID="2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.230661 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8"} err="failed to get container status \"2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8\": rpc error: code = NotFound desc = could not find container \"2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8\": container with ID starting with 2eb3623d1eb08e8a99634b118aee7e38284a3aa1b27d20c9b5a1dff3bfddbbb8 not found: ID does not exist" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.230697 4880 scope.go:117] "RemoveContainer" containerID="56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66" Dec 01 02:59:40 crc kubenswrapper[4880]: E1201 02:59:40.233919 4880 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66\": container with ID starting with 56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66 not found: ID does not exist" containerID="56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.233975 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66"} err="failed to get container status \"56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66\": rpc error: code = NotFound desc = could not find container \"56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66\": container with ID starting with 56db9bee14ab2f18c22edf6bb1bcb1adfd1ebd35b93e758ea87b7b0b1cff9c66 not found: ID does not exist" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.257232 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.455135 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4cxsv"] Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.461774 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4cxsv"] Dec 01 02:59:40 crc kubenswrapper[4880]: I1201 02:59:40.795983 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" path="/var/lib/kubelet/pods/164c787a-f422-44ea-9cac-99166ce43f0b/volumes" Dec 01 02:59:43 crc kubenswrapper[4880]: I1201 02:59:43.928789 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55sbb"] Dec 01 02:59:43 crc kubenswrapper[4880]: I1201 02:59:43.929936 4880 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55sbb" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="registry-server" containerID="cri-o://d803e7fb9b8e22a8535b0b5ee8e3f8a5f4c3c142bde337d7ebc6ded3c8adb8d8" gracePeriod=2 Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.157090 4880 generic.go:334] "Generic (PLEG): container finished" podID="123479b4-c08d-4081-8f32-3c0609583ed6" containerID="d803e7fb9b8e22a8535b0b5ee8e3f8a5f4c3c142bde337d7ebc6ded3c8adb8d8" exitCode=0 Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.157130 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55sbb" event={"ID":"123479b4-c08d-4081-8f32-3c0609583ed6","Type":"ContainerDied","Data":"d803e7fb9b8e22a8535b0b5ee8e3f8a5f4c3c142bde337d7ebc6ded3c8adb8d8"} Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.320357 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.380029 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-catalog-content\") pod \"123479b4-c08d-4081-8f32-3c0609583ed6\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.380460 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-utilities\") pod \"123479b4-c08d-4081-8f32-3c0609583ed6\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.380546 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-895ns\" (UniqueName: 
\"kubernetes.io/projected/123479b4-c08d-4081-8f32-3c0609583ed6-kube-api-access-895ns\") pod \"123479b4-c08d-4081-8f32-3c0609583ed6\" (UID: \"123479b4-c08d-4081-8f32-3c0609583ed6\") " Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.381691 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-utilities" (OuterVolumeSpecName: "utilities") pod "123479b4-c08d-4081-8f32-3c0609583ed6" (UID: "123479b4-c08d-4081-8f32-3c0609583ed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.390665 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123479b4-c08d-4081-8f32-3c0609583ed6-kube-api-access-895ns" (OuterVolumeSpecName: "kube-api-access-895ns") pod "123479b4-c08d-4081-8f32-3c0609583ed6" (UID: "123479b4-c08d-4081-8f32-3c0609583ed6"). InnerVolumeSpecName "kube-api-access-895ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.481834 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.481912 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-895ns\" (UniqueName: \"kubernetes.io/projected/123479b4-c08d-4081-8f32-3c0609583ed6-kube-api-access-895ns\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.508921 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "123479b4-c08d-4081-8f32-3c0609583ed6" (UID: "123479b4-c08d-4081-8f32-3c0609583ed6"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:44 crc kubenswrapper[4880]: I1201 02:59:44.582915 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/123479b4-c08d-4081-8f32-3c0609583ed6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.165423 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55sbb" event={"ID":"123479b4-c08d-4081-8f32-3c0609583ed6","Type":"ContainerDied","Data":"b27880daac6cde12f20d2dc3caf88d3439aa893a035d92130c89401834044e83"} Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.165481 4880 scope.go:117] "RemoveContainer" containerID="d803e7fb9b8e22a8535b0b5ee8e3f8a5f4c3c142bde337d7ebc6ded3c8adb8d8" Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.165492 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55sbb" Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.187365 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55sbb"] Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.192403 4880 scope.go:117] "RemoveContainer" containerID="7b80cc2ab4ce5509f2ae0e9ca4bc945cb2bdcf1b4a13b6c0a02d3c1eebef7800" Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.196860 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55sbb"] Dec 01 02:59:45 crc kubenswrapper[4880]: I1201 02:59:45.215175 4880 scope.go:117] "RemoveContainer" containerID="a8b4699ef27590421cdc4883df4d3c538cb9d66c0727f39f62a46962be1060d9" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 02:59:46.505851 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 
02:59:46.505928 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 02:59:46.577322 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 02:59:46.719284 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 02:59:46.720061 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 02:59:46.780303 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:59:46 crc kubenswrapper[4880]: I1201 02:59:46.799131 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" path="/var/lib/kubelet/pods/123479b4-c08d-4081-8f32-3c0609583ed6/volumes" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.003505 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.003836 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.047578 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.231484 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.244838 4880 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.249424 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.369193 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.369255 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.369306 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.369890 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 02:59:47 crc kubenswrapper[4880]: I1201 02:59:47.369959 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" 
containerID="cri-o://9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37" gracePeriod=600 Dec 01 02:59:48 crc kubenswrapper[4880]: I1201 02:59:48.189354 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37" exitCode=0 Dec 01 02:59:48 crc kubenswrapper[4880]: I1201 02:59:48.189519 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37"} Dec 01 02:59:48 crc kubenswrapper[4880]: I1201 02:59:48.190635 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"1a2f470a436bc91169f368a1c915cb04ff7c639b62bb02c7613be50a7734fc88"} Dec 01 02:59:48 crc kubenswrapper[4880]: I1201 02:59:48.930034 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dv65n"] Dec 01 02:59:49 crc kubenswrapper[4880]: I1201 02:59:49.142723 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.201311 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dv65n" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="registry-server" containerID="cri-o://1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c" gracePeriod=2 Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.640797 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.782064 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-catalog-content\") pod \"058f5e4b-8c67-4ac4-ba76-857541f70949\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.782139 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-utilities\") pod \"058f5e4b-8c67-4ac4-ba76-857541f70949\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.782165 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snzb5\" (UniqueName: \"kubernetes.io/projected/058f5e4b-8c67-4ac4-ba76-857541f70949-kube-api-access-snzb5\") pod \"058f5e4b-8c67-4ac4-ba76-857541f70949\" (UID: \"058f5e4b-8c67-4ac4-ba76-857541f70949\") " Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.784177 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-utilities" (OuterVolumeSpecName: "utilities") pod "058f5e4b-8c67-4ac4-ba76-857541f70949" (UID: "058f5e4b-8c67-4ac4-ba76-857541f70949"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.799071 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058f5e4b-8c67-4ac4-ba76-857541f70949-kube-api-access-snzb5" (OuterVolumeSpecName: "kube-api-access-snzb5") pod "058f5e4b-8c67-4ac4-ba76-857541f70949" (UID: "058f5e4b-8c67-4ac4-ba76-857541f70949"). InnerVolumeSpecName "kube-api-access-snzb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.834949 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "058f5e4b-8c67-4ac4-ba76-857541f70949" (UID: "058f5e4b-8c67-4ac4-ba76-857541f70949"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.883334 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.883368 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snzb5\" (UniqueName: \"kubernetes.io/projected/058f5e4b-8c67-4ac4-ba76-857541f70949-kube-api-access-snzb5\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:50 crc kubenswrapper[4880]: I1201 02:59:50.883378 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/058f5e4b-8c67-4ac4-ba76-857541f70949-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.207582 4880 generic.go:334] "Generic (PLEG): container finished" podID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerID="1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c" exitCode=0 Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.207631 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv65n" event={"ID":"058f5e4b-8c67-4ac4-ba76-857541f70949","Type":"ContainerDied","Data":"1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c"} Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.207649 4880 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dv65n" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.207670 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dv65n" event={"ID":"058f5e4b-8c67-4ac4-ba76-857541f70949","Type":"ContainerDied","Data":"15dc8927a560f079f7dd8958031037b02267acb5f33bb588758e031948fd67c8"} Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.207692 4880 scope.go:117] "RemoveContainer" containerID="1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.220043 4880 scope.go:117] "RemoveContainer" containerID="1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.230254 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dv65n"] Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.234056 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dv65n"] Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.234158 4880 scope.go:117] "RemoveContainer" containerID="c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.260376 4880 scope.go:117] "RemoveContainer" containerID="1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c" Dec 01 02:59:51 crc kubenswrapper[4880]: E1201 02:59:51.260936 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c\": container with ID starting with 1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c not found: ID does not exist" containerID="1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.260974 
4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c"} err="failed to get container status \"1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c\": rpc error: code = NotFound desc = could not find container \"1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c\": container with ID starting with 1307a7aac90e8d0a9038d8267f2b34cbeeb5b0028de048bc4f2acd66a4caf28c not found: ID does not exist" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.260999 4880 scope.go:117] "RemoveContainer" containerID="1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416" Dec 01 02:59:51 crc kubenswrapper[4880]: E1201 02:59:51.261275 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416\": container with ID starting with 1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416 not found: ID does not exist" containerID="1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.261303 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416"} err="failed to get container status \"1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416\": rpc error: code = NotFound desc = could not find container \"1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416\": container with ID starting with 1a416bd2432d646d1fde3fb8e799736c30dbef5da1bbadd9b27e5db4f10e0416 not found: ID does not exist" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.261324 4880 scope.go:117] "RemoveContainer" containerID="c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64" Dec 01 02:59:51 crc kubenswrapper[4880]: E1201 
02:59:51.261537 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64\": container with ID starting with c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64 not found: ID does not exist" containerID="c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.261557 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64"} err="failed to get container status \"c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64\": rpc error: code = NotFound desc = could not find container \"c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64\": container with ID starting with c48352f583843f814503bba56314d768db89b10dd0b2c54817dfc8f10fd1fd64 not found: ID does not exist" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.319574 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fq8c"] Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.319753 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8fq8c" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="registry-server" containerID="cri-o://8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a" gracePeriod=2 Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.738790 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.895833 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-catalog-content\") pod \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.895962 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7nhs\" (UniqueName: \"kubernetes.io/projected/4838ff1a-deaf-4970-bf9b-acc638e7aadc-kube-api-access-l7nhs\") pod \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.896011 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-utilities\") pod \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\" (UID: \"4838ff1a-deaf-4970-bf9b-acc638e7aadc\") " Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.898476 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-utilities" (OuterVolumeSpecName: "utilities") pod "4838ff1a-deaf-4970-bf9b-acc638e7aadc" (UID: "4838ff1a-deaf-4970-bf9b-acc638e7aadc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.900703 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4838ff1a-deaf-4970-bf9b-acc638e7aadc-kube-api-access-l7nhs" (OuterVolumeSpecName: "kube-api-access-l7nhs") pod "4838ff1a-deaf-4970-bf9b-acc638e7aadc" (UID: "4838ff1a-deaf-4970-bf9b-acc638e7aadc"). InnerVolumeSpecName "kube-api-access-l7nhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.931140 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4838ff1a-deaf-4970-bf9b-acc638e7aadc" (UID: "4838ff1a-deaf-4970-bf9b-acc638e7aadc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.998478 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.998582 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7nhs\" (UniqueName: \"kubernetes.io/projected/4838ff1a-deaf-4970-bf9b-acc638e7aadc-kube-api-access-l7nhs\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:51 crc kubenswrapper[4880]: I1201 02:59:51.998610 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4838ff1a-deaf-4970-bf9b-acc638e7aadc-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.220563 4880 generic.go:334] "Generic (PLEG): container finished" podID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerID="8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a" exitCode=0 Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.220622 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8fq8c" event={"ID":"4838ff1a-deaf-4970-bf9b-acc638e7aadc","Type":"ContainerDied","Data":"8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a"} Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.220665 4880 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-8fq8c" event={"ID":"4838ff1a-deaf-4970-bf9b-acc638e7aadc","Type":"ContainerDied","Data":"6a3656822845560a0010be2eda4c0eb031fb4bb973aa2ea6e92e9b990e0e3f2b"} Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.220691 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8fq8c" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.220696 4880 scope.go:117] "RemoveContainer" containerID="8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.242643 4880 scope.go:117] "RemoveContainer" containerID="00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.285133 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fq8c"] Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.285451 4880 scope.go:117] "RemoveContainer" containerID="fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.293525 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8fq8c"] Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.309929 4880 scope.go:117] "RemoveContainer" containerID="8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a" Dec 01 02:59:52 crc kubenswrapper[4880]: E1201 02:59:52.310403 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a\": container with ID starting with 8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a not found: ID does not exist" containerID="8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.310456 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a"} err="failed to get container status \"8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a\": rpc error: code = NotFound desc = could not find container \"8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a\": container with ID starting with 8d7a374742c1d9d049c43afc4555eacf10e2f98a77c88b65c756ced52b36a90a not found: ID does not exist" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.310488 4880 scope.go:117] "RemoveContainer" containerID="00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572" Dec 01 02:59:52 crc kubenswrapper[4880]: E1201 02:59:52.311222 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572\": container with ID starting with 00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572 not found: ID does not exist" containerID="00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.311261 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572"} err="failed to get container status \"00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572\": rpc error: code = NotFound desc = could not find container \"00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572\": container with ID starting with 00d21d2cfe195192c4967235d1a8e6bb755ee58ed04cebe1f4b86d7d29ba4572 not found: ID does not exist" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.311279 4880 scope.go:117] "RemoveContainer" containerID="fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5" Dec 01 02:59:52 crc kubenswrapper[4880]: E1201 
02:59:52.311541 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5\": container with ID starting with fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5 not found: ID does not exist" containerID="fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.311573 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5"} err="failed to get container status \"fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5\": rpc error: code = NotFound desc = could not find container \"fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5\": container with ID starting with fa0a9aa78af2316c2b279faf4618e4c42ccf3c116804366cc2be7abb5f2b8fd5 not found: ID does not exist" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.500745 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" podUID="f36bde77-88b0-46fb-b33d-85c7c430ab11" containerName="oauth-openshift" containerID="cri-o://5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f" gracePeriod=15 Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.794199 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" path="/var/lib/kubelet/pods/058f5e4b-8c67-4ac4-ba76-857541f70949/volumes" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.795328 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" path="/var/lib/kubelet/pods/4838ff1a-deaf-4970-bf9b-acc638e7aadc/volumes" Dec 01 02:59:52 crc kubenswrapper[4880]: I1201 02:59:52.947300 4880 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017374 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-login\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017437 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-session\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017467 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-error\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017492 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-cliconfig\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017514 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-serving-cert\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: 
\"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017539 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-ocp-branding-template\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017566 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-idp-0-file-data\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017595 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-dir\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017618 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-service-ca\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017666 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-trusted-ca-bundle\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc 
kubenswrapper[4880]: I1201 02:59:53.017700 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4z2\" (UniqueName: \"kubernetes.io/projected/f36bde77-88b0-46fb-b33d-85c7c430ab11-kube-api-access-dx4z2\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017721 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-router-certs\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017746 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-provider-selection\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.017774 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-policies\") pod \"f36bde77-88b0-46fb-b33d-85c7c430ab11\" (UID: \"f36bde77-88b0-46fb-b33d-85c7c430ab11\") " Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.018714 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.019728 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.019990 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.020029 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.020528 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.025047 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.027132 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.027697 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.028061 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.029220 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36bde77-88b0-46fb-b33d-85c7c430ab11-kube-api-access-dx4z2" (OuterVolumeSpecName: "kube-api-access-dx4z2") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "kube-api-access-dx4z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.035425 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.036122 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.036560 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.043213 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f36bde77-88b0-46fb-b33d-85c7c430ab11" (UID: "f36bde77-88b0-46fb-b33d-85c7c430ab11"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.118970 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119016 4880 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119036 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119059 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119078 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-router-certs\") on node \"crc\" 
DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119096 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4z2\" (UniqueName: \"kubernetes.io/projected/f36bde77-88b0-46fb-b33d-85c7c430ab11-kube-api-access-dx4z2\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119143 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119174 4880 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119186 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119199 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119209 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119219 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119228 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.119238 4880 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36bde77-88b0-46fb-b33d-85c7c430ab11-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.232222 4880 generic.go:334] "Generic (PLEG): container finished" podID="f36bde77-88b0-46fb-b33d-85c7c430ab11" containerID="5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f" exitCode=0 Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.232271 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" event={"ID":"f36bde77-88b0-46fb-b33d-85c7c430ab11","Type":"ContainerDied","Data":"5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f"} Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.232279 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.232302 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgdhn" event={"ID":"f36bde77-88b0-46fb-b33d-85c7c430ab11","Type":"ContainerDied","Data":"82168d82327dbcb8d9d17d40796163da7ae099e2664a67e7d3e4a085f54262c0"} Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.232336 4880 scope.go:117] "RemoveContainer" containerID="5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.264263 4880 scope.go:117] "RemoveContainer" containerID="5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f" Dec 01 02:59:53 crc kubenswrapper[4880]: E1201 02:59:53.265197 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f\": container with ID starting with 5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f not found: ID does not exist" containerID="5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.265236 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f"} err="failed to get container status \"5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f\": rpc error: code = NotFound desc = could not find container \"5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f\": container with ID starting with 5e9543e2df1a26478532a351a799e89ce3ce7e3b482d5fc63565261bea3d514f not found: ID does not exist" Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.284480 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-vgdhn"] Dec 01 02:59:53 crc kubenswrapper[4880]: I1201 02:59:53.289628 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgdhn"] Dec 01 02:59:54 crc kubenswrapper[4880]: I1201 02:59:54.795451 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36bde77-88b0-46fb-b33d-85c7c430ab11" path="/var/lib/kubelet/pods/f36bde77-88b0-46fb-b33d-85c7c430ab11/volumes" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.144475 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm"] Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146495 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146556 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146579 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146593 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146610 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146623 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146639 4880 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146652 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146669 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146681 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146698 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146713 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146729 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146743 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146764 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146778 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="extract-utilities" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146798 4880 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f36bde77-88b0-46fb-b33d-85c7c430ab11" containerName="oauth-openshift" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146810 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36bde77-88b0-46fb-b33d-85c7c430ab11" containerName="oauth-openshift" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146827 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146839 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146860 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146899 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="extract-content" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146915 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146928 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: E1201 03:00:00.146945 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.146956 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.147117 4880 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="123479b4-c08d-4081-8f32-3c0609583ed6" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.147137 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36bde77-88b0-46fb-b33d-85c7c430ab11" containerName="oauth-openshift" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.147152 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="4838ff1a-deaf-4970-bf9b-acc638e7aadc" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.147172 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="164c787a-f422-44ea-9cac-99166ce43f0b" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.147189 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="058f5e4b-8c67-4ac4-ba76-857541f70949" containerName="registry-server" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.147789 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.150521 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.150984 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.167536 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm"] Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.328282 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4902a7aa-8767-491f-87d4-d90e98d0e700-config-volume\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.328641 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnnx\" (UniqueName: \"kubernetes.io/projected/4902a7aa-8767-491f-87d4-d90e98d0e700-kube-api-access-vhnnx\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.328750 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4902a7aa-8767-491f-87d4-d90e98d0e700-secret-volume\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.431180 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhnnx\" (UniqueName: \"kubernetes.io/projected/4902a7aa-8767-491f-87d4-d90e98d0e700-kube-api-access-vhnnx\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.431257 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4902a7aa-8767-491f-87d4-d90e98d0e700-secret-volume\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.431337 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4902a7aa-8767-491f-87d4-d90e98d0e700-config-volume\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.432398 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4902a7aa-8767-491f-87d4-d90e98d0e700-config-volume\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.444739 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4902a7aa-8767-491f-87d4-d90e98d0e700-secret-volume\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.459847 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhnnx\" (UniqueName: \"kubernetes.io/projected/4902a7aa-8767-491f-87d4-d90e98d0e700-kube-api-access-vhnnx\") pod \"collect-profiles-29409300-4f7vm\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.472015 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:00 crc kubenswrapper[4880]: I1201 03:00:00.864567 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm"] Dec 01 03:00:01 crc kubenswrapper[4880]: I1201 03:00:01.287402 4880 generic.go:334] "Generic (PLEG): container finished" podID="4902a7aa-8767-491f-87d4-d90e98d0e700" containerID="775476b0fff2e75fdd10e62f6b8a48e63ad35fedcc7305a83eeccf9b67414469" exitCode=0 Dec 01 03:00:01 crc kubenswrapper[4880]: I1201 03:00:01.287497 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" event={"ID":"4902a7aa-8767-491f-87d4-d90e98d0e700","Type":"ContainerDied","Data":"775476b0fff2e75fdd10e62f6b8a48e63ad35fedcc7305a83eeccf9b67414469"} Dec 01 03:00:01 crc kubenswrapper[4880]: I1201 03:00:01.287755 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" 
event={"ID":"4902a7aa-8767-491f-87d4-d90e98d0e700","Type":"ContainerStarted","Data":"17985401d106af68f1f007badc36aa2f17dd95a772078b7d6cba755f64a7092e"} Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.601757 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.762489 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhnnx\" (UniqueName: \"kubernetes.io/projected/4902a7aa-8767-491f-87d4-d90e98d0e700-kube-api-access-vhnnx\") pod \"4902a7aa-8767-491f-87d4-d90e98d0e700\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.762612 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4902a7aa-8767-491f-87d4-d90e98d0e700-secret-volume\") pod \"4902a7aa-8767-491f-87d4-d90e98d0e700\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.762775 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4902a7aa-8767-491f-87d4-d90e98d0e700-config-volume\") pod \"4902a7aa-8767-491f-87d4-d90e98d0e700\" (UID: \"4902a7aa-8767-491f-87d4-d90e98d0e700\") " Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.764768 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d9549f6c-mpcvn"] Dec 01 03:00:02 crc kubenswrapper[4880]: E1201 03:00:02.765327 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4902a7aa-8767-491f-87d4-d90e98d0e700" containerName="collect-profiles" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.765725 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4902a7aa-8767-491f-87d4-d90e98d0e700" 
containerName="collect-profiles" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.765286 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4902a7aa-8767-491f-87d4-d90e98d0e700-config-volume" (OuterVolumeSpecName: "config-volume") pod "4902a7aa-8767-491f-87d4-d90e98d0e700" (UID: "4902a7aa-8767-491f-87d4-d90e98d0e700"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.766121 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="4902a7aa-8767-491f-87d4-d90e98d0e700" containerName="collect-profiles" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.766667 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.771888 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4902a7aa-8767-491f-87d4-d90e98d0e700-kube-api-access-vhnnx" (OuterVolumeSpecName: "kube-api-access-vhnnx") pod "4902a7aa-8767-491f-87d4-d90e98d0e700" (UID: "4902a7aa-8767-491f-87d4-d90e98d0e700"). InnerVolumeSpecName "kube-api-access-vhnnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.772126 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4902a7aa-8767-491f-87d4-d90e98d0e700-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4902a7aa-8767-491f-87d4-d90e98d0e700" (UID: "4902a7aa-8767-491f-87d4-d90e98d0e700"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.772933 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.773418 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.773599 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.773780 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.774388 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.775601 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.777149 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.777305 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.777426 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.777824 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 
03:00:02.779001 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.779431 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.791777 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9549f6c-mpcvn"] Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.799775 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.799989 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.816103 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.865763 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cm45\" (UniqueName: \"kubernetes.io/projected/1bce670c-6249-4b17-89fe-dba77e0e94b9-kube-api-access-2cm45\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.865839 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bce670c-6249-4b17-89fe-dba77e0e94b9-audit-dir\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: 
I1201 03:00:02.865913 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-session\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.865945 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.866004 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.866068 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.866192 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.866291 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867020 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867117 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867147 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867478 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867530 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-audit-policies\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867564 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867650 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhnnx\" (UniqueName: \"kubernetes.io/projected/4902a7aa-8767-491f-87d4-d90e98d0e700-kube-api-access-vhnnx\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867666 4880 reconciler_common.go:293] "Volume detached for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/4902a7aa-8767-491f-87d4-d90e98d0e700-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.867679 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4902a7aa-8767-491f-87d4-d90e98d0e700-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968525 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968606 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-audit-policies\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968657 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968705 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cm45\" (UniqueName: \"kubernetes.io/projected/1bce670c-6249-4b17-89fe-dba77e0e94b9-kube-api-access-2cm45\") pod 
\"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968748 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bce670c-6249-4b17-89fe-dba77e0e94b9-audit-dir\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968776 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-session\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968819 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968859 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968918 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.968958 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.969023 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.969068 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.969133 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.969180 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.971394 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.971675 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.973697 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bce670c-6249-4b17-89fe-dba77e0e94b9-audit-dir\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.974624 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.974818 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.974837 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.976635 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.977150 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " 
pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.978959 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bce670c-6249-4b17-89fe-dba77e0e94b9-audit-policies\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.993809 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.995094 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.997219 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-system-session\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:02 crc kubenswrapper[4880]: I1201 03:00:02.998358 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/1bce670c-6249-4b17-89fe-dba77e0e94b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:03 crc kubenswrapper[4880]: I1201 03:00:03.002511 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cm45\" (UniqueName: \"kubernetes.io/projected/1bce670c-6249-4b17-89fe-dba77e0e94b9-kube-api-access-2cm45\") pod \"oauth-openshift-7d9549f6c-mpcvn\" (UID: \"1bce670c-6249-4b17-89fe-dba77e0e94b9\") " pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:03 crc kubenswrapper[4880]: I1201 03:00:03.145031 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:03 crc kubenswrapper[4880]: I1201 03:00:03.303144 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" event={"ID":"4902a7aa-8767-491f-87d4-d90e98d0e700","Type":"ContainerDied","Data":"17985401d106af68f1f007badc36aa2f17dd95a772078b7d6cba755f64a7092e"} Dec 01 03:00:03 crc kubenswrapper[4880]: I1201 03:00:03.303183 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17985401d106af68f1f007badc36aa2f17dd95a772078b7d6cba755f64a7092e" Dec 01 03:00:03 crc kubenswrapper[4880]: I1201 03:00:03.303237 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm" Dec 01 03:00:03 crc kubenswrapper[4880]: I1201 03:00:03.607112 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9549f6c-mpcvn"] Dec 01 03:00:04 crc kubenswrapper[4880]: I1201 03:00:04.311140 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" event={"ID":"1bce670c-6249-4b17-89fe-dba77e0e94b9","Type":"ContainerStarted","Data":"a82d51fa86706c63c93f5f2f01093c0c53527edfd452b0a4bd1552c2d41c10c7"} Dec 01 03:00:04 crc kubenswrapper[4880]: I1201 03:00:04.311493 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" event={"ID":"1bce670c-6249-4b17-89fe-dba77e0e94b9","Type":"ContainerStarted","Data":"75c747fa03a2990f28be0c60a57773060ac0fcafa8667335b4d889d7d23c2302"} Dec 01 03:00:04 crc kubenswrapper[4880]: I1201 03:00:04.311524 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:04 crc kubenswrapper[4880]: I1201 03:00:04.321082 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" Dec 01 03:00:04 crc kubenswrapper[4880]: I1201 03:00:04.345825 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d9549f6c-mpcvn" podStartSLOduration=37.345793291 podStartE2EDuration="37.345793291s" podCreationTimestamp="2025-12-01 02:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:00:04.341419501 +0000 UTC m=+233.852673923" watchObservedRunningTime="2025-12-01 03:00:04.345793291 +0000 UTC m=+233.857047713" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.022284 4880 
kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.023993 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.025180 4880 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.025920 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c" gracePeriod=15 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.026049 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d" gracePeriod=15 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.026135 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df" gracePeriod=15 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.026172 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e" gracePeriod=15 
Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.025939 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a" gracePeriod=15 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.030612 4880 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031017 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031048 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031069 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031081 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031096 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031109 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031124 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031203 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031223 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031234 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031258 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031270 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031452 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031470 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031488 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031503 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031519 4880 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031535 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.031706 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.031722 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.093014 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.168964 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169015 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169041 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169057 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169070 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169089 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169101 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.169114 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270108 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270172 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270190 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270205 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270208 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270226 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270241 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270256 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270258 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270278 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270308 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270325 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270333 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270353 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270384 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.270397 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.382083 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.401523 4880 generic.go:334] "Generic (PLEG): container finished" podID="4ae59261-9c95-4ea2-bab8-29e4dff81623" containerID="79bcc5672cef1c48ea0b8ceb58c5f9ce223a072cb1e3d3fb0f495bf67f5b3011" exitCode=0 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.401600 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4ae59261-9c95-4ea2-bab8-29e4dff81623","Type":"ContainerDied","Data":"79bcc5672cef1c48ea0b8ceb58c5f9ce223a072cb1e3d3fb0f495bf67f5b3011"} Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.402429 4880 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.402911 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection 
refused" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.403261 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.405921 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.407783 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.409421 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a" exitCode=0 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.409459 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e" exitCode=0 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.409479 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d" exitCode=0 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.409493 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df" exitCode=2 Dec 01 03:00:16 crc kubenswrapper[4880]: I1201 03:00:16.409562 4880 scope.go:117] "RemoveContainer" 
containerID="bbd41fb8ed197a07f2a2884289e2d0705b2c0de265c71443629872edd83858f4" Dec 01 03:00:16 crc kubenswrapper[4880]: W1201 03:00:16.429145 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-981850e98a915412a2f83024cc7c2c969a8052c40715ca7ef19cda2dff0a0e57 WatchSource:0}: Error finding container 981850e98a915412a2f83024cc7c2c969a8052c40715ca7ef19cda2dff0a0e57: Status 404 returned error can't find the container with id 981850e98a915412a2f83024cc7c2c969a8052c40715ca7ef19cda2dff0a0e57 Dec 01 03:00:16 crc kubenswrapper[4880]: E1201 03:00:16.433689 4880 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187cf82abb05be5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 03:00:16.432422494 +0000 UTC m=+245.943676866,LastTimestamp:2025-12-01 03:00:16.432422494 +0000 UTC m=+245.943676866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.418882 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a"} Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.419133 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"981850e98a915412a2f83024cc7c2c969a8052c40715ca7ef19cda2dff0a0e57"} Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.420255 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.420855 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.423120 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.650174 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.651074 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.651294 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.789818 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ae59261-9c95-4ea2-bab8-29e4dff81623-kube-api-access\") pod \"4ae59261-9c95-4ea2-bab8-29e4dff81623\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.789881 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-kubelet-dir\") pod \"4ae59261-9c95-4ea2-bab8-29e4dff81623\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.789907 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-var-lock\") pod \"4ae59261-9c95-4ea2-bab8-29e4dff81623\" (UID: \"4ae59261-9c95-4ea2-bab8-29e4dff81623\") " Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.790167 4880 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-var-lock" (OuterVolumeSpecName: "var-lock") pod "4ae59261-9c95-4ea2-bab8-29e4dff81623" (UID: "4ae59261-9c95-4ea2-bab8-29e4dff81623"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.790792 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ae59261-9c95-4ea2-bab8-29e4dff81623" (UID: "4ae59261-9c95-4ea2-bab8-29e4dff81623"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.798270 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae59261-9c95-4ea2-bab8-29e4dff81623-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ae59261-9c95-4ea2-bab8-29e4dff81623" (UID: "4ae59261-9c95-4ea2-bab8-29e4dff81623"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.891979 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ae59261-9c95-4ea2-bab8-29e4dff81623-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.892025 4880 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:17 crc kubenswrapper[4880]: I1201 03:00:17.892057 4880 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4ae59261-9c95-4ea2-bab8-29e4dff81623-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.428843 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4ae59261-9c95-4ea2-bab8-29e4dff81623","Type":"ContainerDied","Data":"72e315fef94357eb4cacf7b9e7b906bf6087a70be1f869939ee0637e1ea0d67a"} Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.429164 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72e315fef94357eb4cacf7b9e7b906bf6087a70be1f869939ee0637e1ea0d67a" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.428898 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.490552 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.491118 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.494378 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.495134 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.495591 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.496003 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.496239 4880 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.614734 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.614833 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.614962 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.614993 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.615019 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.615076 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.615468 4880 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.615512 4880 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.615525 4880 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 03:00:18 crc kubenswrapper[4880]: I1201 03:00:18.793166 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.436684 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.437268 4880 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c" exitCode=0 Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.437312 4880 scope.go:117] "RemoveContainer" containerID="181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.437446 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.438999 4880 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.439482 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.439637 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.440424 4880 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.440714 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.441173 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.457037 4880 scope.go:117] "RemoveContainer" containerID="7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.474011 4880 scope.go:117] "RemoveContainer" containerID="6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.491166 4880 scope.go:117] "RemoveContainer" containerID="1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.505405 4880 scope.go:117] "RemoveContainer" containerID="f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.520112 4880 scope.go:117] "RemoveContainer" containerID="fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.539773 4880 scope.go:117] "RemoveContainer" containerID="181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a" Dec 01 03:00:19 crc kubenswrapper[4880]: E1201 03:00:19.540309 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\": container with ID starting with 181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a not 
found: ID does not exist" containerID="181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.540348 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a"} err="failed to get container status \"181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\": rpc error: code = NotFound desc = could not find container \"181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a\": container with ID starting with 181ad33ed49c058aa0f5e438992f9a76ecf6cab0ae66eff96972f6c4a30e4b4a not found: ID does not exist" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.540373 4880 scope.go:117] "RemoveContainer" containerID="7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e" Dec 01 03:00:19 crc kubenswrapper[4880]: E1201 03:00:19.540775 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\": container with ID starting with 7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e not found: ID does not exist" containerID="7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.540801 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e"} err="failed to get container status \"7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\": rpc error: code = NotFound desc = could not find container \"7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e\": container with ID starting with 7925d6ad6ca8d8b0875df6a97d5eb343c6aa0139e4bf77743f8e6a02fba9e87e not found: ID does not exist" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.540822 
4880 scope.go:117] "RemoveContainer" containerID="6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d" Dec 01 03:00:19 crc kubenswrapper[4880]: E1201 03:00:19.541286 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\": container with ID starting with 6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d not found: ID does not exist" containerID="6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.541301 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d"} err="failed to get container status \"6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\": rpc error: code = NotFound desc = could not find container \"6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d\": container with ID starting with 6e7be1e01542a0d993d2559c493af14a500ae7f57be28a1b8acd9a808bfffc0d not found: ID does not exist" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.541328 4880 scope.go:117] "RemoveContainer" containerID="1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df" Dec 01 03:00:19 crc kubenswrapper[4880]: E1201 03:00:19.541603 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\": container with ID starting with 1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df not found: ID does not exist" containerID="1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.541627 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df"} err="failed to get container status \"1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\": rpc error: code = NotFound desc = could not find container \"1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df\": container with ID starting with 1ba517f50863256037d226e0cf597eb00009e90d69a9c4a7cb6bb99dba6fa5df not found: ID does not exist" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.541642 4880 scope.go:117] "RemoveContainer" containerID="f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c" Dec 01 03:00:19 crc kubenswrapper[4880]: E1201 03:00:19.542136 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\": container with ID starting with f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c not found: ID does not exist" containerID="f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.542163 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c"} err="failed to get container status \"f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\": rpc error: code = NotFound desc = could not find container \"f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c\": container with ID starting with f10a682ffd2504c2b5bf0b675cb74169b17afc8dde5aa9ebfc2d89cc7f2b3f3c not found: ID does not exist" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.542179 4880 scope.go:117] "RemoveContainer" containerID="fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668" Dec 01 03:00:19 crc kubenswrapper[4880]: E1201 03:00:19.542439 4880 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\": container with ID starting with fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668 not found: ID does not exist" containerID="fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668" Dec 01 03:00:19 crc kubenswrapper[4880]: I1201 03:00:19.542462 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668"} err="failed to get container status \"fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\": rpc error: code = NotFound desc = could not find container \"fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668\": container with ID starting with fa5ac095941d29ecd3f23f3f8c137d228c833587348450d833ae921ec37c7668 not found: ID does not exist" Dec 01 03:00:20 crc kubenswrapper[4880]: I1201 03:00:20.788290 4880 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:20 crc kubenswrapper[4880]: I1201 03:00:20.788880 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:20 crc kubenswrapper[4880]: I1201 03:00:20.789140 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.181819 4880 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.39:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187cf82abb05be5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 03:00:16.432422494 +0000 UTC m=+245.943676866,LastTimestamp:2025-12-01 03:00:16.432422494 +0000 UTC m=+245.943676866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.480663 4880 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.481427 4880 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:23 crc kubenswrapper[4880]: 
E1201 03:00:23.482128 4880 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.482517 4880 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.482826 4880 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:23 crc kubenswrapper[4880]: I1201 03:00:23.482850 4880 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.483007 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="200ms" Dec 01 03:00:23 crc kubenswrapper[4880]: E1201 03:00:23.683597 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="400ms" Dec 01 03:00:24 crc kubenswrapper[4880]: E1201 03:00:24.084780 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: 
connection refused" interval="800ms" Dec 01 03:00:24 crc kubenswrapper[4880]: E1201 03:00:24.886662 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="1.6s" Dec 01 03:00:26 crc kubenswrapper[4880]: E1201 03:00:26.488357 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="3.2s" Dec 01 03:00:29 crc kubenswrapper[4880]: E1201 03:00:29.690750 4880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.39:6443: connect: connection refused" interval="6.4s" Dec 01 03:00:29 crc kubenswrapper[4880]: I1201 03:00:29.783618 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:29 crc kubenswrapper[4880]: I1201 03:00:29.784507 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:29 crc kubenswrapper[4880]: I1201 03:00:29.784951 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:29 crc kubenswrapper[4880]: I1201 03:00:29.813378 4880 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:29 crc kubenswrapper[4880]: I1201 03:00:29.813410 4880 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:29 crc kubenswrapper[4880]: E1201 03:00:29.813798 4880 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:29 crc kubenswrapper[4880]: I1201 03:00:29.815008 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:29 crc kubenswrapper[4880]: W1201 03:00:29.844237 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d3b4cdce0d951324deb9a0235235a48d1610cfafdc4bcfce6c01ae14fa45fe21 WatchSource:0}: Error finding container d3b4cdce0d951324deb9a0235235a48d1610cfafdc4bcfce6c01ae14fa45fe21: Status 404 returned error can't find the container with id d3b4cdce0d951324deb9a0235235a48d1610cfafdc4bcfce6c01ae14fa45fe21 Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.501795 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.502246 4880 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1" exitCode=1 Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.502349 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1"} Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.503016 4880 scope.go:117] "RemoveContainer" containerID="d7e07f5aba2f9df243f05999bd35128f14152fa821f93eff1a55e8a6d46716a1" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.504340 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection 
refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.504686 4880 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.505151 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.506900 4880 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="26ac9b877b6ced381b6ef255f3cbcdd8216ac2c5e35a49772b5eef005d141a71" exitCode=0 Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.506916 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"26ac9b877b6ced381b6ef255f3cbcdd8216ac2c5e35a49772b5eef005d141a71"} Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.507471 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3b4cdce0d951324deb9a0235235a48d1610cfafdc4bcfce6c01ae14fa45fe21"} Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.508564 4880 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.508959 4880 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.508994 4880 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.509671 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: E1201 03:00:30.509763 4880 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.510172 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.792469 4880 status_manager.go:851] "Failed to get status for pod" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.793365 4880 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.793682 4880 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:30 crc kubenswrapper[4880]: I1201 03:00:30.793979 4880 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.39:6443: connect: connection refused" Dec 01 03:00:31 crc kubenswrapper[4880]: I1201 03:00:31.515893 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 03:00:31 crc kubenswrapper[4880]: I1201 03:00:31.515968 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28fe6b0cfa4eea520882585eb5b475ecfa9a44ba0bdbcc629fb3fdbf4b97c6ee"} Dec 01 03:00:31 crc 
kubenswrapper[4880]: I1201 03:00:31.519175 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf123bcdf5bb6b0e7fcdca855f30d02be3f5805b81c46e6635ac25bae21535f3"} Dec 01 03:00:31 crc kubenswrapper[4880]: I1201 03:00:31.519214 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e095e99e98d1773673f592f6e2664216cb9bac7cbf58aea76120f0eb47ca0f60"} Dec 01 03:00:31 crc kubenswrapper[4880]: I1201 03:00:31.519223 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d18c194da04c24be86e6619ff95aa25677d9514a1801186bc93431d3a30a43c3"} Dec 01 03:00:32 crc kubenswrapper[4880]: I1201 03:00:32.526939 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8843444ca68f1e5079acf038d2f8b00e793fbe63a6b562dfd82aa00ded29a8b"} Dec 01 03:00:32 crc kubenswrapper[4880]: I1201 03:00:32.527205 4880 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:32 crc kubenswrapper[4880]: I1201 03:00:32.527225 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:32 crc kubenswrapper[4880]: I1201 03:00:32.527230 4880 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:32 crc kubenswrapper[4880]: I1201 03:00:32.527236 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5ef74bd299072e175177934c395627b7406bc6bdd2e033e6c163bbab8676ae16"} Dec 01 03:00:33 crc kubenswrapper[4880]: I1201 03:00:33.223111 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 03:00:33 crc kubenswrapper[4880]: I1201 03:00:33.245382 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 03:00:33 crc kubenswrapper[4880]: I1201 03:00:33.532351 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 03:00:34 crc kubenswrapper[4880]: I1201 03:00:34.815787 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:34 crc kubenswrapper[4880]: I1201 03:00:34.817948 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:34 crc kubenswrapper[4880]: I1201 03:00:34.822801 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:37 crc kubenswrapper[4880]: I1201 03:00:37.565988 4880 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:38 crc kubenswrapper[4880]: I1201 03:00:38.562082 4880 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:38 crc kubenswrapper[4880]: I1201 03:00:38.562432 4880 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:38 crc 
kubenswrapper[4880]: I1201 03:00:38.569544 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:38 crc kubenswrapper[4880]: I1201 03:00:38.572159 4880 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50d4e3fd-cce9-4289-9061-9632432836e4" Dec 01 03:00:39 crc kubenswrapper[4880]: I1201 03:00:39.585128 4880 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:39 crc kubenswrapper[4880]: I1201 03:00:39.585511 4880 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a9f5ae8a-5dc0-4aa4-9068-c41c0f18d9d1" Dec 01 03:00:40 crc kubenswrapper[4880]: I1201 03:00:40.807556 4880 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50d4e3fd-cce9-4289-9061-9632432836e4" Dec 01 03:00:46 crc kubenswrapper[4880]: I1201 03:00:46.870651 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 03:00:47 crc kubenswrapper[4880]: I1201 03:00:47.018936 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 03:00:47 crc kubenswrapper[4880]: I1201 03:00:47.195771 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 03:00:47 crc kubenswrapper[4880]: I1201 03:00:47.935496 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 03:00:48 crc kubenswrapper[4880]: I1201 03:00:48.014126 4880 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 03:00:48 crc kubenswrapper[4880]: I1201 03:00:48.706749 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.171508 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.218605 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.314348 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.339388 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.429573 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.532252 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.548018 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.600334 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.923932 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 
03:00:49 crc kubenswrapper[4880]: I1201 03:00:49.926624 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.120463 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.165555 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.213756 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.246900 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.280445 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.289205 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.322429 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.322584 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.345981 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.381236 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.408613 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.512290 4880 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.566489 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.638868 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.732623 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.909542 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 03:00:50 crc kubenswrapper[4880]: I1201 03:00:50.965333 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.016275 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.027791 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.098768 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.154378 4880 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.156709 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.217965 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.279830 4880 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.337875 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.340243 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.353093 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.394598 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.413426 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.417677 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.443321 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 03:00:51 crc 
kubenswrapper[4880]: I1201 03:00:51.493144 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.532971 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.566432 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.636495 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.655768 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.737964 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.851646 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.867903 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.908232 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.915436 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.952245 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 03:00:51 crc kubenswrapper[4880]: I1201 03:00:51.991273 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.069599 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.175951 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.230849 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.234439 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.235143 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.248820 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.249350 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.311794 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.416323 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.421816 4880 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.517094 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.581103 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.661200 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.703341 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.707452 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.741958 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.762741 4880 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.766053 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.766035147 podStartE2EDuration="36.766035147s" podCreationTimestamp="2025-12-01 03:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:00:37.343477456 +0000 UTC m=+266.854731838" watchObservedRunningTime="2025-12-01 03:00:52.766035147 +0000 UTC m=+282.277289519" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.767016 4880 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.767052 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.771464 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.804911 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.804878658 podStartE2EDuration="15.804878658s" podCreationTimestamp="2025-12-01 03:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:00:52.804117406 +0000 UTC m=+282.315371778" watchObservedRunningTime="2025-12-01 03:00:52.804878658 +0000 UTC m=+282.316133030" Dec 01 03:00:52 crc kubenswrapper[4880]: I1201 03:00:52.971340 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.020785 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.035066 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.092479 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.178583 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 03:00:53 crc 
kubenswrapper[4880]: I1201 03:00:53.192405 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.226400 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.314340 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.318823 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.373942 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.394005 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.416196 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.441156 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.455457 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.462951 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.470001 4880 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.482318 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.600058 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.635645 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.651540 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.862263 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.881123 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.908505 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.929755 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.956140 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 03:00:53 crc kubenswrapper[4880]: I1201 03:00:53.984443 4880 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.022957 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.199961 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.206139 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.229175 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.238493 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.241944 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.475607 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.491678 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.623469 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.669457 4880 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 
03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.715439 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.785760 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.817527 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.825759 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.860918 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.930698 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 03:00:54 crc kubenswrapper[4880]: I1201 03:00:54.979993 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.011158 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.037842 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.094701 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.104536 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.127665 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.166324 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.275333 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.284687 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.408579 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.432375 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.434131 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.578835 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.587073 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.590950 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.601850 4880 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.648164 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.665640 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.688387 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.699339 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.898420 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.938865 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.944562 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.954629 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 03:00:55 crc kubenswrapper[4880]: I1201 03:00:55.984104 4880 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.026035 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 
03:00:56.045319 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.158000 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.180755 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.236665 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.266187 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.427846 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.466034 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.505665 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.553182 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.575553 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.638701 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.658896 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.670474 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.788930 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.810519 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 03:00:56 crc kubenswrapper[4880]: I1201 03:00:56.941291 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.005010 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.011014 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.025336 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.086792 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.142580 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.147515 4880 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.278756 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.302410 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.432915 4880 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.495492 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.547943 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.550827 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.672580 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.784684 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.854183 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.923348 4880 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 03:00:57 crc kubenswrapper[4880]: I1201 03:00:57.993247 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.138685 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.147755 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.167553 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.339800 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.391101 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.444632 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.469436 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.470623 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.588591 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.618954 4880 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.739823 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.785582 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.806012 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.810252 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.954232 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 01 03:00:58 crc kubenswrapper[4880]: I1201 03:00:58.985246 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.050576 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.184154 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.273461 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.276431 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.304026 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.417057 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.453588 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.473349 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.531465 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.628014 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.667672 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.783795 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.791820 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.805980 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.841892 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.907547 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 01 03:00:59 crc kubenswrapper[4880]: I1201 03:00:59.925009 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.011799 4880 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.012022 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a" gracePeriod=5
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.048363 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.068919 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.102683 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.119535 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.145590 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.162219 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.231149 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.404804 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.597012 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.628750 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 01 03:01:00 crc kubenswrapper[4880]: I1201 03:01:00.859718 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.002800 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.060714 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.087810 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.186197 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.229920 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.342484 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.376422 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.388440 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.400848 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.418677 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.510219 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65qhf"]
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.510497 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65qhf" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="registry-server" containerID="cri-o://fb5ca190f3da9b5a89bcf7b660cc6cf298ef2e0facc4216f82f45c523b7d8de3" gracePeriod=30
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.515409 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8wd4"]
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.515670 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j8wd4" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="registry-server" containerID="cri-o://34e67f1da4fcd2941fae6824f59ba6604f5ddb1fd213e09a43a959563be8a2eb" gracePeriod=30
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.518885 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.520517 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.523243 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.529977 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lf8k7"]
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.530154 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerName="marketplace-operator" containerID="cri-o://2ff6338ce89514589b92b64ac3a7b3c79448a8091f3063824b2badd303036142" gracePeriod=30
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.538729 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rstnz"]
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.539001 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rstnz" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="registry-server" containerID="cri-o://c1dd4528ddd5ea4cc92e76ea58973db3113ee94b77886047a8956eb6dbd6479c" gracePeriod=30
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.552439 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.554331 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8d88"]
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.554568 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q8d88" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="registry-server" containerID="cri-o://d54b7e9c5d0e6ea15b73349b8bc2575bcad7414a08e309a6ab318d413e3309de" gracePeriod=30
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.586487 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdvkw"]
Dec 01 03:01:01 crc kubenswrapper[4880]: E1201 03:01:01.586769 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" containerName="installer"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.586783 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" containerName="installer"
Dec 01 03:01:01 crc kubenswrapper[4880]: E1201 03:01:01.586794 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.586801 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.586947 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.586974 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae59261-9c95-4ea2-bab8-29e4dff81623" containerName="installer"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.587416 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.595261 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.601471 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdvkw"]
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.723226 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.747299 4880 generic.go:334] "Generic (PLEG): container finished" podID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerID="2ff6338ce89514589b92b64ac3a7b3c79448a8091f3063824b2badd303036142" exitCode=0
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.747406 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" event={"ID":"fd0d64d0-7952-425c-95d5-5180ed5f588c","Type":"ContainerDied","Data":"2ff6338ce89514589b92b64ac3a7b3c79448a8091f3063824b2badd303036142"}
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.762046 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ece21e3a-e8a6-438c-8a18-96a379963517-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.762081 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ece21e3a-e8a6-438c-8a18-96a379963517-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.762262 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2n9q\" (UniqueName: \"kubernetes.io/projected/ece21e3a-e8a6-438c-8a18-96a379963517-kube-api-access-m2n9q\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.762975 4880 generic.go:334] "Generic (PLEG): container finished" podID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerID="fb5ca190f3da9b5a89bcf7b660cc6cf298ef2e0facc4216f82f45c523b7d8de3" exitCode=0
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.763066 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qhf" event={"ID":"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4","Type":"ContainerDied","Data":"fb5ca190f3da9b5a89bcf7b660cc6cf298ef2e0facc4216f82f45c523b7d8de3"}
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.770548 4880 generic.go:334] "Generic (PLEG): container finished" podID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerID="34e67f1da4fcd2941fae6824f59ba6604f5ddb1fd213e09a43a959563be8a2eb" exitCode=0
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.770645 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8wd4" event={"ID":"d076e21e-9946-4ee4-9953-7c0a3830c0fc","Type":"ContainerDied","Data":"34e67f1da4fcd2941fae6824f59ba6604f5ddb1fd213e09a43a959563be8a2eb"}
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.775434 4880 generic.go:334] "Generic (PLEG): container finished" podID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerID="d54b7e9c5d0e6ea15b73349b8bc2575bcad7414a08e309a6ab318d413e3309de" exitCode=0
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.775533 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8d88" event={"ID":"55a2d619-2948-4157-a5ab-a2e2a9247cc2","Type":"ContainerDied","Data":"d54b7e9c5d0e6ea15b73349b8bc2575bcad7414a08e309a6ab318d413e3309de"}
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.780029 4880 generic.go:334] "Generic (PLEG): container finished" podID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerID="c1dd4528ddd5ea4cc92e76ea58973db3113ee94b77886047a8956eb6dbd6479c" exitCode=0
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.780075 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rstnz" event={"ID":"07ddb3cc-a464-4645-8b0c-7475a9b75330","Type":"ContainerDied","Data":"c1dd4528ddd5ea4cc92e76ea58973db3113ee94b77886047a8956eb6dbd6479c"}
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.864383 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2n9q\" (UniqueName: \"kubernetes.io/projected/ece21e3a-e8a6-438c-8a18-96a379963517-kube-api-access-m2n9q\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.864447 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ece21e3a-e8a6-438c-8a18-96a379963517-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.864465 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ece21e3a-e8a6-438c-8a18-96a379963517-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.868993 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ece21e3a-e8a6-438c-8a18-96a379963517-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.873101 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ece21e3a-e8a6-438c-8a18-96a379963517-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.889222 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65qhf"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.896983 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2n9q\" (UniqueName: \"kubernetes.io/projected/ece21e3a-e8a6-438c-8a18-96a379963517-kube-api-access-m2n9q\") pod \"marketplace-operator-79b997595-wdvkw\" (UID: \"ece21e3a-e8a6-438c-8a18-96a379963517\") " pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:01 crc kubenswrapper[4880]: I1201 03:01:01.966048 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.005040 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.030776 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8wd4"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.067901 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbjwn\" (UniqueName: \"kubernetes.io/projected/d076e21e-9946-4ee4-9953-7c0a3830c0fc-kube-api-access-nbjwn\") pod \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.067940 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-utilities\") pod \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.067963 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-catalog-content\") pod \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.067981 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-trusted-ca\") pod \"fd0d64d0-7952-425c-95d5-5180ed5f588c\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.068009 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-catalog-content\") pod \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.068031 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/fd0d64d0-7952-425c-95d5-5180ed5f588c-kube-api-access-2lhsb\") pod \"fd0d64d0-7952-425c-95d5-5180ed5f588c\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.068051 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-utilities\") pod \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\" (UID: \"d076e21e-9946-4ee4-9953-7c0a3830c0fc\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.068066 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-operator-metrics\") pod \"fd0d64d0-7952-425c-95d5-5180ed5f588c\" (UID: \"fd0d64d0-7952-425c-95d5-5180ed5f588c\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.068090 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s46r\" (UniqueName: \"kubernetes.io/projected/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-kube-api-access-4s46r\") pod \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\" (UID: \"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.084500 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fd0d64d0-7952-425c-95d5-5180ed5f588c" (UID: "fd0d64d0-7952-425c-95d5-5180ed5f588c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.085242 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-utilities" (OuterVolumeSpecName: "utilities") pod "d076e21e-9946-4ee4-9953-7c0a3830c0fc" (UID: "d076e21e-9946-4ee4-9953-7c0a3830c0fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.086245 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0d64d0-7952-425c-95d5-5180ed5f588c-kube-api-access-2lhsb" (OuterVolumeSpecName: "kube-api-access-2lhsb") pod "fd0d64d0-7952-425c-95d5-5180ed5f588c" (UID: "fd0d64d0-7952-425c-95d5-5180ed5f588c"). InnerVolumeSpecName "kube-api-access-2lhsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.089059 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-utilities" (OuterVolumeSpecName: "utilities") pod "b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" (UID: "b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.090343 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d076e21e-9946-4ee4-9953-7c0a3830c0fc-kube-api-access-nbjwn" (OuterVolumeSpecName: "kube-api-access-nbjwn") pod "d076e21e-9946-4ee4-9953-7c0a3830c0fc" (UID: "d076e21e-9946-4ee4-9953-7c0a3830c0fc"). InnerVolumeSpecName "kube-api-access-nbjwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.093406 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fd0d64d0-7952-425c-95d5-5180ed5f588c" (UID: "fd0d64d0-7952-425c-95d5-5180ed5f588c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.093426 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8d88"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.098209 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-kube-api-access-4s46r" (OuterVolumeSpecName: "kube-api-access-4s46r") pod "b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" (UID: "b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4"). InnerVolumeSpecName "kube-api-access-4s46r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.103111 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rstnz"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.155785 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" (UID: "b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.159431 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d076e21e-9946-4ee4-9953-7c0a3830c0fc" (UID: "d076e21e-9946-4ee4-9953-7c0a3830c0fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169232 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbjwn\" (UniqueName: \"kubernetes.io/projected/d076e21e-9946-4ee4-9953-7c0a3830c0fc-kube-api-access-nbjwn\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169335 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169410 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169468 4880 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169528 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169596 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/fd0d64d0-7952-425c-95d5-5180ed5f588c-kube-api-access-2lhsb\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169658 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d076e21e-9946-4ee4-9953-7c0a3830c0fc-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.169724 4880 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd0d64d0-7952-425c-95d5-5180ed5f588c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.170149 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s46r\" (UniqueName: \"kubernetes.io/projected/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4-kube-api-access-4s46r\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.177110 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.235209 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.271481 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-catalog-content\") pod \"07ddb3cc-a464-4645-8b0c-7475a9b75330\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.271709 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-catalog-content\") pod \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.271737 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfq7x\" (UniqueName: \"kubernetes.io/projected/07ddb3cc-a464-4645-8b0c-7475a9b75330-kube-api-access-cfq7x\") pod \"07ddb3cc-a464-4645-8b0c-7475a9b75330\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.271780 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-utilities\") pod \"07ddb3cc-a464-4645-8b0c-7475a9b75330\" (UID: \"07ddb3cc-a464-4645-8b0c-7475a9b75330\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.271800 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-utilities\") pod \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.271886 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdrp\" (UniqueName: \"kubernetes.io/projected/55a2d619-2948-4157-a5ab-a2e2a9247cc2-kube-api-access-hbdrp\") pod \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\" (UID: \"55a2d619-2948-4157-a5ab-a2e2a9247cc2\") "
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.283170 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a2d619-2948-4157-a5ab-a2e2a9247cc2-kube-api-access-hbdrp" (OuterVolumeSpecName: "kube-api-access-hbdrp") pod "55a2d619-2948-4157-a5ab-a2e2a9247cc2" (UID: "55a2d619-2948-4157-a5ab-a2e2a9247cc2"). InnerVolumeSpecName "kube-api-access-hbdrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.283852 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-utilities" (OuterVolumeSpecName: "utilities") pod "55a2d619-2948-4157-a5ab-a2e2a9247cc2" (UID: "55a2d619-2948-4157-a5ab-a2e2a9247cc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.289992 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ddb3cc-a464-4645-8b0c-7475a9b75330-kube-api-access-cfq7x" (OuterVolumeSpecName: "kube-api-access-cfq7x") pod "07ddb3cc-a464-4645-8b0c-7475a9b75330" (UID: "07ddb3cc-a464-4645-8b0c-7475a9b75330"). InnerVolumeSpecName "kube-api-access-cfq7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.290113 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-utilities" (OuterVolumeSpecName: "utilities") pod "07ddb3cc-a464-4645-8b0c-7475a9b75330" (UID: "07ddb3cc-a464-4645-8b0c-7475a9b75330"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.307433 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07ddb3cc-a464-4645-8b0c-7475a9b75330" (UID: "07ddb3cc-a464-4645-8b0c-7475a9b75330"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.365145 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.374287 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbdrp\" (UniqueName: \"kubernetes.io/projected/55a2d619-2948-4157-a5ab-a2e2a9247cc2-kube-api-access-hbdrp\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.374313 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.374322 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfq7x\" (UniqueName: \"kubernetes.io/projected/07ddb3cc-a464-4645-8b0c-7475a9b75330-kube-api-access-cfq7x\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.374330 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07ddb3cc-a464-4645-8b0c-7475a9b75330-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.374338 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.390601 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wdvkw"]
Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.399986 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55a2d619-2948-4157-a5ab-a2e2a9247cc2" (UID: "55a2d619-2948-4157-a5ab-a2e2a9247cc2"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.465342 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.477932 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2d619-2948-4157-a5ab-a2e2a9247cc2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.663477 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.713842 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.785736 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8d88" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.788328 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rstnz" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.789953 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65qhf" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.791021 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.795627 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8wd4" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.815623 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw" podStartSLOduration=1.815605495 podStartE2EDuration="1.815605495s" podCreationTimestamp="2025-12-01 03:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:01:02.811808625 +0000 UTC m=+292.323063017" watchObservedRunningTime="2025-12-01 03:01:02.815605495 +0000 UTC m=+292.326859857" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828641 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8d88" event={"ID":"55a2d619-2948-4157-a5ab-a2e2a9247cc2","Type":"ContainerDied","Data":"a0fac64fef673b6a922de2a61dbf0ed2ff66da4ed5791b90f63d191e90490bdc"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828692 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw" event={"ID":"ece21e3a-e8a6-438c-8a18-96a379963517","Type":"ContainerStarted","Data":"951a87a55f4c34fb5314708ef1ec1c7c8711db78865cc395f87efa123d7149c6"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828708 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw" event={"ID":"ece21e3a-e8a6-438c-8a18-96a379963517","Type":"ContainerStarted","Data":"a210c4538c20fe8e1b63d37a537464525912788d748612672f8e45ff95702cae"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828728 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828743 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rstnz" event={"ID":"07ddb3cc-a464-4645-8b0c-7475a9b75330","Type":"ContainerDied","Data":"5ce6e92e65c3c18b8ce0069d17049b121bfedef66e8b5f3d370e787584d945e2"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828758 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65qhf" event={"ID":"b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4","Type":"ContainerDied","Data":"ba3167329db5c782fc257a625a173ac766e81d0f9a3f93a821c5214c9a85a807"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828773 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lf8k7" event={"ID":"fd0d64d0-7952-425c-95d5-5180ed5f588c","Type":"ContainerDied","Data":"86725aed7757c303017d2cfcadbe4f852979dd894d67125c6980ebff820dc36f"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828813 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wdvkw" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828825 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8wd4" event={"ID":"d076e21e-9946-4ee4-9953-7c0a3830c0fc","Type":"ContainerDied","Data":"6de45d2ab0dadde6cddbec4f15bc2b87456246c2f5e5a4b610bd95e177ec711a"} Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.828984 4880 scope.go:117] "RemoveContainer" containerID="d54b7e9c5d0e6ea15b73349b8bc2575bcad7414a08e309a6ab318d413e3309de" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.856330 4880 scope.go:117] "RemoveContainer" containerID="75eed8ffa6cce1862bccda3657f7d740dba9af8111522a1b03a6f0fc6c13538d" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.879506 4880 scope.go:117] "RemoveContainer" containerID="7e90ed69aa8f07e79a5e74f5b1c354eb149e385f2f3ea57ae866b0fabf6dfdf3" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.882952 4880 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.899673 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8wd4"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.915626 4880 scope.go:117] "RemoveContainer" containerID="c1dd4528ddd5ea4cc92e76ea58973db3113ee94b77886047a8956eb6dbd6479c" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.928663 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j8wd4"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.932419 4880 scope.go:117] "RemoveContainer" containerID="b58ef27910f83c3030c5447e18713be060ae3e5b5169fbd36df7a71d14384b34" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.935651 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65qhf"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.950572 4880 scope.go:117] "RemoveContainer" containerID="38010a4b04df2fcd5c8370db8f4255ae71135639b4c95b3d3f46c6b73da2e766" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.955476 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65qhf"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.960007 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8d88"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.963722 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q8d88"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.965595 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lf8k7"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.968126 4880 scope.go:117] "RemoveContainer" 
containerID="fb5ca190f3da9b5a89bcf7b660cc6cf298ef2e0facc4216f82f45c523b7d8de3" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.969068 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lf8k7"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.971988 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rstnz"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.975900 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rstnz"] Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.994945 4880 scope.go:117] "RemoveContainer" containerID="04add8a6af2bfc9f474692432e226ac99e1b45b703e0ebe443c69fc6a99b8bd6" Dec 01 03:01:02 crc kubenswrapper[4880]: I1201 03:01:02.995655 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.047911 4880 scope.go:117] "RemoveContainer" containerID="444dfb8e1a6cbdb934467c7ade48718463639e238655a9727687b93798b62c58" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.063071 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.068149 4880 scope.go:117] "RemoveContainer" containerID="2ff6338ce89514589b92b64ac3a7b3c79448a8091f3063824b2badd303036142" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.102010 4880 scope.go:117] "RemoveContainer" containerID="34e67f1da4fcd2941fae6824f59ba6604f5ddb1fd213e09a43a959563be8a2eb" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.130775 4880 scope.go:117] "RemoveContainer" containerID="e8057399eab846d599ad55383b8455aaece2ecd0ebe135239dd7aa022924dd22" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.143095 4880 scope.go:117] "RemoveContainer" 
containerID="beae3fd649ff9a666fa737987a228725fc7962b9ed86f7eb8f4286d12eadba08" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.239546 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.266141 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.292190 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.342720 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.345737 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 03:01:03 crc kubenswrapper[4880]: I1201 03:01:03.847760 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 03:01:04 crc kubenswrapper[4880]: I1201 03:01:04.069926 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 03:01:04 crc kubenswrapper[4880]: I1201 03:01:04.797491 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" path="/var/lib/kubelet/pods/07ddb3cc-a464-4645-8b0c-7475a9b75330/volumes" Dec 01 03:01:04 crc kubenswrapper[4880]: I1201 03:01:04.799197 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" path="/var/lib/kubelet/pods/55a2d619-2948-4157-a5ab-a2e2a9247cc2/volumes" Dec 01 03:01:04 crc kubenswrapper[4880]: I1201 03:01:04.800812 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" path="/var/lib/kubelet/pods/b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4/volumes" Dec 01 03:01:04 crc kubenswrapper[4880]: I1201 03:01:04.803084 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" path="/var/lib/kubelet/pods/d076e21e-9946-4ee4-9953-7c0a3830c0fc/volumes" Dec 01 03:01:04 crc kubenswrapper[4880]: I1201 03:01:04.804477 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" path="/var/lib/kubelet/pods/fd0d64d0-7952-425c-95d5-5180ed5f588c/volumes" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.603937 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.604009 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721659 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721746 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721833 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721961 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721867 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721806 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.722067 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.721973 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.722165 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.722556 4880 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.722582 4880 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.722598 4880 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.722642 4880 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.732266 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.823618 4880 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.828539 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.828580 4880 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a" exitCode=137 Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.828638 4880 scope.go:117] "RemoveContainer" containerID="b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.828746 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.859579 4880 scope.go:117] "RemoveContainer" containerID="b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a" Dec 01 03:01:05 crc kubenswrapper[4880]: E1201 03:01:05.860218 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a\": container with ID starting with b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a not found: ID does not exist" containerID="b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a" Dec 01 03:01:05 crc kubenswrapper[4880]: I1201 03:01:05.860265 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a"} err="failed to get container status \"b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a\": rpc error: code = NotFound desc = could not find container \"b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a\": container with ID starting with b878ccce85805c7be703bae78f293474e7eafc706a97cbb4f4594f67cd08457a not found: ID does not exist" Dec 01 03:01:06 crc kubenswrapper[4880]: I1201 03:01:06.790235 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 01 03:01:06 crc kubenswrapper[4880]: I1201 03:01:06.790451 4880 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 01 03:01:06 crc kubenswrapper[4880]: I1201 03:01:06.804916 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 03:01:06 crc kubenswrapper[4880]: I1201 
03:01:06.804946 4880 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="17a4c2e6-71bd-4b66-94f3-eedf3cc2dcda" Dec 01 03:01:06 crc kubenswrapper[4880]: I1201 03:01:06.807396 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 03:01:06 crc kubenswrapper[4880]: I1201 03:01:06.807433 4880 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="17a4c2e6-71bd-4b66-94f3-eedf3cc2dcda" Dec 01 03:01:14 crc kubenswrapper[4880]: E1201 03:01:14.397693 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.522075 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mxnkp"] Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.522857 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" podUID="183bf44d-c621-4f91-8ddc-10093cfc2596" containerName="controller-manager" containerID="cri-o://de9e0d5b52caf3a2fbf0feface417552680ba41db0b06aa98b13d3f0a6011811" gracePeriod=30 Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.623118 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k"] Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.623328 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" podUID="14fe6389-fa84-4ec6-8891-0a379e0d4f29" 
containerName="route-controller-manager" containerID="cri-o://33367c9df740fbcc0ccc194f2270ea2f7f8fcfec3e6cb68909d57cece4094078" gracePeriod=30 Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.940127 4880 generic.go:334] "Generic (PLEG): container finished" podID="14fe6389-fa84-4ec6-8891-0a379e0d4f29" containerID="33367c9df740fbcc0ccc194f2270ea2f7f8fcfec3e6cb68909d57cece4094078" exitCode=0 Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.940229 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" event={"ID":"14fe6389-fa84-4ec6-8891-0a379e0d4f29","Type":"ContainerDied","Data":"33367c9df740fbcc0ccc194f2270ea2f7f8fcfec3e6cb68909d57cece4094078"} Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.942299 4880 generic.go:334] "Generic (PLEG): container finished" podID="183bf44d-c621-4f91-8ddc-10093cfc2596" containerID="de9e0d5b52caf3a2fbf0feface417552680ba41db0b06aa98b13d3f0a6011811" exitCode=0 Dec 01 03:01:22 crc kubenswrapper[4880]: I1201 03:01:22.942351 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" event={"ID":"183bf44d-c621-4f91-8ddc-10093cfc2596","Type":"ContainerDied","Data":"de9e0d5b52caf3a2fbf0feface417552680ba41db0b06aa98b13d3f0a6011811"} Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.057618 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.166272 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183bf44d-c621-4f91-8ddc-10093cfc2596-serving-cert\") pod \"183bf44d-c621-4f91-8ddc-10093cfc2596\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.166342 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-config\") pod \"183bf44d-c621-4f91-8ddc-10093cfc2596\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.166375 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-client-ca\") pod \"183bf44d-c621-4f91-8ddc-10093cfc2596\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.166408 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-proxy-ca-bundles\") pod \"183bf44d-c621-4f91-8ddc-10093cfc2596\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.166473 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8m6j\" (UniqueName: \"kubernetes.io/projected/183bf44d-c621-4f91-8ddc-10093cfc2596-kube-api-access-q8m6j\") pod \"183bf44d-c621-4f91-8ddc-10093cfc2596\" (UID: \"183bf44d-c621-4f91-8ddc-10093cfc2596\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.167392 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-client-ca" (OuterVolumeSpecName: "client-ca") pod "183bf44d-c621-4f91-8ddc-10093cfc2596" (UID: "183bf44d-c621-4f91-8ddc-10093cfc2596"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.167435 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "183bf44d-c621-4f91-8ddc-10093cfc2596" (UID: "183bf44d-c621-4f91-8ddc-10093cfc2596"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.167462 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-config" (OuterVolumeSpecName: "config") pod "183bf44d-c621-4f91-8ddc-10093cfc2596" (UID: "183bf44d-c621-4f91-8ddc-10093cfc2596"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.177258 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183bf44d-c621-4f91-8ddc-10093cfc2596-kube-api-access-q8m6j" (OuterVolumeSpecName: "kube-api-access-q8m6j") pod "183bf44d-c621-4f91-8ddc-10093cfc2596" (UID: "183bf44d-c621-4f91-8ddc-10093cfc2596"). InnerVolumeSpecName "kube-api-access-q8m6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.178708 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183bf44d-c621-4f91-8ddc-10093cfc2596-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "183bf44d-c621-4f91-8ddc-10093cfc2596" (UID: "183bf44d-c621-4f91-8ddc-10093cfc2596"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.207026 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.269589 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8m6j\" (UniqueName: \"kubernetes.io/projected/183bf44d-c621-4f91-8ddc-10093cfc2596-kube-api-access-q8m6j\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.269637 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/183bf44d-c621-4f91-8ddc-10093cfc2596-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.269653 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.269667 4880 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.269681 4880 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/183bf44d-c621-4f91-8ddc-10093cfc2596-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.370063 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fe6389-fa84-4ec6-8891-0a379e0d4f29-serving-cert\") pod \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 
03:01:23.370104 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-config\") pod \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.370184 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6df9r\" (UniqueName: \"kubernetes.io/projected/14fe6389-fa84-4ec6-8891-0a379e0d4f29-kube-api-access-6df9r\") pod \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.370228 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-client-ca\") pod \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\" (UID: \"14fe6389-fa84-4ec6-8891-0a379e0d4f29\") " Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.370631 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-config" (OuterVolumeSpecName: "config") pod "14fe6389-fa84-4ec6-8891-0a379e0d4f29" (UID: "14fe6389-fa84-4ec6-8891-0a379e0d4f29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.370803 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-client-ca" (OuterVolumeSpecName: "client-ca") pod "14fe6389-fa84-4ec6-8891-0a379e0d4f29" (UID: "14fe6389-fa84-4ec6-8891-0a379e0d4f29"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.373306 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fe6389-fa84-4ec6-8891-0a379e0d4f29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14fe6389-fa84-4ec6-8891-0a379e0d4f29" (UID: "14fe6389-fa84-4ec6-8891-0a379e0d4f29"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.376199 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fe6389-fa84-4ec6-8891-0a379e0d4f29-kube-api-access-6df9r" (OuterVolumeSpecName: "kube-api-access-6df9r") pod "14fe6389-fa84-4ec6-8891-0a379e0d4f29" (UID: "14fe6389-fa84-4ec6-8891-0a379e0d4f29"). InnerVolumeSpecName "kube-api-access-6df9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.471290 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6df9r\" (UniqueName: \"kubernetes.io/projected/14fe6389-fa84-4ec6-8891-0a379e0d4f29-kube-api-access-6df9r\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.471342 4880 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.471360 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14fe6389-fa84-4ec6-8891-0a379e0d4f29-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.471379 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe6389-fa84-4ec6-8891-0a379e0d4f29-config\") on node \"crc\" DevicePath 
\"\"" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.949300 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" event={"ID":"14fe6389-fa84-4ec6-8891-0a379e0d4f29","Type":"ContainerDied","Data":"23c26be5f123e5387e0cdfc0ef757de41c08d7543779bdeadcb8f7acd7c16b0e"} Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.949362 4880 scope.go:117] "RemoveContainer" containerID="33367c9df740fbcc0ccc194f2270ea2f7f8fcfec3e6cb68909d57cece4094078" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.949390 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.951392 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" event={"ID":"183bf44d-c621-4f91-8ddc-10093cfc2596","Type":"ContainerDied","Data":"ecc4a663b506eb4383c591ccb7e5ca8e973bc94a62ef77074ba39eb74adec941"} Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.951442 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mxnkp" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.984124 4880 scope.go:117] "RemoveContainer" containerID="de9e0d5b52caf3a2fbf0feface417552680ba41db0b06aa98b13d3f0a6011811" Dec 01 03:01:23 crc kubenswrapper[4880]: I1201 03:01:23.989141 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mxnkp"] Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.003808 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mxnkp"] Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.017786 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k"] Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.055078 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8cw6k"] Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.496505 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.791748 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fe6389-fa84-4ec6-8891-0a379e0d4f29" path="/var/lib/kubelet/pods/14fe6389-fa84-4ec6-8891-0a379e0d4f29/volumes" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.793102 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183bf44d-c621-4f91-8ddc-10093cfc2596" path="/var/lib/kubelet/pods/183bf44d-c621-4f91-8ddc-10093cfc2596/volumes" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837411 4880 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf"] Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837752 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837774 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837797 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837811 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837828 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837842 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837893 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837907 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837927 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerName="marketplace-operator" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837940 4880 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerName="marketplace-operator" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837960 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.837972 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.837989 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838001 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838020 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838034 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838053 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838065 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="extract-utilities" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838082 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838094 4880 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838113 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14fe6389-fa84-4ec6-8891-0a379e0d4f29" containerName="route-controller-manager" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838125 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="14fe6389-fa84-4ec6-8891-0a379e0d4f29" containerName="route-controller-manager" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838142 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838154 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838172 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838184 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838204 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183bf44d-c621-4f91-8ddc-10093cfc2596" containerName="controller-manager" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838216 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="183bf44d-c621-4f91-8ddc-10093cfc2596" containerName="controller-manager" Dec 01 03:01:24 crc kubenswrapper[4880]: E1201 03:01:24.838232 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838244 4880 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="extract-content" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838398 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d076e21e-9946-4ee4-9953-7c0a3830c0fc" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838418 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ddb3cc-a464-4645-8b0c-7475a9b75330" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838433 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="183bf44d-c621-4f91-8ddc-10093cfc2596" containerName="controller-manager" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838450 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a2d619-2948-4157-a5ab-a2e2a9247cc2" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838468 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83fde09-cd8f-4bcd-8ca3-0ba1a5acadc4" containerName="registry-server" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838484 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="14fe6389-fa84-4ec6-8891-0a379e0d4f29" containerName="route-controller-manager" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.838505 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0d64d0-7952-425c-95d5-5180ed5f588c" containerName="marketplace-operator" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.839252 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.843340 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr"] Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.843723 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.843945 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.844076 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.844608 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.844727 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.844766 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.844932 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.848696 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.849346 4880 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.849551 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.849828 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.849905 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.850175 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.869825 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.870008 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf"] Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.875292 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr"] Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987105 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-proxy-ca-bundles\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987187 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-config\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987253 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feec82d3-5527-464c-ae07-28a07d37189c-serving-cert\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987326 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vxf\" (UniqueName: \"kubernetes.io/projected/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-kube-api-access-x8vxf\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987361 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-client-ca\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987444 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-serving-cert\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " 
pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987501 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-client-ca\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987579 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz84b\" (UniqueName: \"kubernetes.io/projected/feec82d3-5527-464c-ae07-28a07d37189c-kube-api-access-dz84b\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:24 crc kubenswrapper[4880]: I1201 03:01:24.987610 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-config\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.088777 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-serving-cert\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.088840 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-client-ca\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.088922 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz84b\" (UniqueName: \"kubernetes.io/projected/feec82d3-5527-464c-ae07-28a07d37189c-kube-api-access-dz84b\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.088965 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-config\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.089013 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-proxy-ca-bundles\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.089045 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-config\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc 
kubenswrapper[4880]: I1201 03:01:25.089076 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feec82d3-5527-464c-ae07-28a07d37189c-serving-cert\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.089120 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vxf\" (UniqueName: \"kubernetes.io/projected/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-kube-api-access-x8vxf\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.089167 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-client-ca\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.090450 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-client-ca\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.090899 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-config\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: 
\"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.091252 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-client-ca\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.091914 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-proxy-ca-bundles\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.092978 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-config\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.094629 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feec82d3-5527-464c-ae07-28a07d37189c-serving-cert\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.104094 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-serving-cert\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.127540 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vxf\" (UniqueName: \"kubernetes.io/projected/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-kube-api-access-x8vxf\") pod \"controller-manager-544f5b9bcd-2gdcr\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.130584 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz84b\" (UniqueName: \"kubernetes.io/projected/feec82d3-5527-464c-ae07-28a07d37189c-kube-api-access-dz84b\") pod \"route-controller-manager-7c7d557f8d-r2mdf\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.169094 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.184339 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.392204 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf"] Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.439550 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr"] Dec 01 03:01:25 crc kubenswrapper[4880]: W1201 03:01:25.444495 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa3403f9_1de6_4fa9_9a8b_d53206b8ca93.slice/crio-e1735981f5f0e988c25f8f174b760d30092c5cc62777458eb6d1c34f41e45fde WatchSource:0}: Error finding container e1735981f5f0e988c25f8f174b760d30092c5cc62777458eb6d1c34f41e45fde: Status 404 returned error can't find the container with id e1735981f5f0e988c25f8f174b760d30092c5cc62777458eb6d1c34f41e45fde Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.963769 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" event={"ID":"feec82d3-5527-464c-ae07-28a07d37189c","Type":"ContainerStarted","Data":"1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac"} Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.964117 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" event={"ID":"feec82d3-5527-464c-ae07-28a07d37189c","Type":"ContainerStarted","Data":"1c625bf5b263639300b652b123b72a54295df252fdd22490554753d329a27c43"} Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.964504 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 
03:01:25.965158 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" event={"ID":"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93","Type":"ContainerStarted","Data":"d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3"} Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.965185 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" event={"ID":"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93","Type":"ContainerStarted","Data":"e1735981f5f0e988c25f8f174b760d30092c5cc62777458eb6d1c34f41e45fde"} Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.965683 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.974729 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:01:25 crc kubenswrapper[4880]: I1201 03:01:25.986073 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:01:26 crc kubenswrapper[4880]: I1201 03:01:26.005815 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" podStartSLOduration=4.005793064 podStartE2EDuration="4.005793064s" podCreationTimestamp="2025-12-01 03:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:01:25.985356514 +0000 UTC m=+315.496610906" watchObservedRunningTime="2025-12-01 03:01:26.005793064 +0000 UTC m=+315.517047446" Dec 01 03:01:26 crc kubenswrapper[4880]: I1201 03:01:26.046936 4880 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" podStartSLOduration=4.046864438 podStartE2EDuration="4.046864438s" podCreationTimestamp="2025-12-01 03:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:01:26.043129611 +0000 UTC m=+315.554383993" watchObservedRunningTime="2025-12-01 03:01:26.046864438 +0000 UTC m=+315.558118810" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.036923 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dwgms"] Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.038265 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.044416 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.066463 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwgms"] Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.142735 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjxq\" (UniqueName: \"kubernetes.io/projected/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-kube-api-access-hkjxq\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.142998 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-utilities\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " 
pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.143106 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-catalog-content\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.241984 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7cvrm"] Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.243124 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.243790 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkjxq\" (UniqueName: \"kubernetes.io/projected/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-kube-api-access-hkjxq\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.243835 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-utilities\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.243878 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-catalog-content\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 
01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.244354 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-catalog-content\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.244443 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-utilities\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.245553 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.264262 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cvrm"] Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.267712 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkjxq\" (UniqueName: \"kubernetes.io/projected/1e7784cf-4c39-4f1e-b8c8-b4e26f167c22-kube-api-access-hkjxq\") pod \"redhat-operators-dwgms\" (UID: \"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22\") " pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.345020 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33eaa3d6-064e-4fe4-91b5-c01791456cd7-catalog-content\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.345083 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k5gl\" (UniqueName: \"kubernetes.io/projected/33eaa3d6-064e-4fe4-91b5-c01791456cd7-kube-api-access-7k5gl\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.345110 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33eaa3d6-064e-4fe4-91b5-c01791456cd7-utilities\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.358168 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.446840 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33eaa3d6-064e-4fe4-91b5-c01791456cd7-catalog-content\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.447277 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k5gl\" (UniqueName: \"kubernetes.io/projected/33eaa3d6-064e-4fe4-91b5-c01791456cd7-kube-api-access-7k5gl\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.447307 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33eaa3d6-064e-4fe4-91b5-c01791456cd7-utilities\") 
pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.450623 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33eaa3d6-064e-4fe4-91b5-c01791456cd7-utilities\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.450740 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33eaa3d6-064e-4fe4-91b5-c01791456cd7-catalog-content\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.493148 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k5gl\" (UniqueName: \"kubernetes.io/projected/33eaa3d6-064e-4fe4-91b5-c01791456cd7-kube-api-access-7k5gl\") pod \"redhat-marketplace-7cvrm\" (UID: \"33eaa3d6-064e-4fe4-91b5-c01791456cd7\") " pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.556844 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.781582 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwgms"] Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.954843 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cvrm"] Dec 01 03:01:29 crc kubenswrapper[4880]: W1201 03:01:29.975400 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33eaa3d6_064e_4fe4_91b5_c01791456cd7.slice/crio-bb7589ab7b63b529e2828c7cfb13319c51a4d8b677ce24c6d6be18ca5babbc09 WatchSource:0}: Error finding container bb7589ab7b63b529e2828c7cfb13319c51a4d8b677ce24c6d6be18ca5babbc09: Status 404 returned error can't find the container with id bb7589ab7b63b529e2828c7cfb13319c51a4d8b677ce24c6d6be18ca5babbc09 Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.988112 4880 generic.go:334] "Generic (PLEG): container finished" podID="1e7784cf-4c39-4f1e-b8c8-b4e26f167c22" containerID="0f65a1c06f433c963ff421ca06ec62b542e029e3956b95575a379deb9f152ef6" exitCode=0 Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.988173 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwgms" event={"ID":"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22","Type":"ContainerDied","Data":"0f65a1c06f433c963ff421ca06ec62b542e029e3956b95575a379deb9f152ef6"} Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.988197 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwgms" event={"ID":"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22","Type":"ContainerStarted","Data":"e10a07c0bfe7547b82e6001f56e5f4811f39266d50bd1e72b4f11055fb9090f2"} Dec 01 03:01:29 crc kubenswrapper[4880]: I1201 03:01:29.989125 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7cvrm" event={"ID":"33eaa3d6-064e-4fe4-91b5-c01791456cd7","Type":"ContainerStarted","Data":"bb7589ab7b63b529e2828c7cfb13319c51a4d8b677ce24c6d6be18ca5babbc09"} Dec 01 03:01:30 crc kubenswrapper[4880]: I1201 03:01:30.996162 4880 generic.go:334] "Generic (PLEG): container finished" podID="33eaa3d6-064e-4fe4-91b5-c01791456cd7" containerID="0e219121b58574cc9cc39735ab528b62199038982c302c03a2f37fd2c1cc0fb2" exitCode=0 Dec 01 03:01:30 crc kubenswrapper[4880]: I1201 03:01:30.996349 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cvrm" event={"ID":"33eaa3d6-064e-4fe4-91b5-c01791456cd7","Type":"ContainerDied","Data":"0e219121b58574cc9cc39735ab528b62199038982c302c03a2f37fd2c1cc0fb2"} Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.435234 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxt8z"] Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.436153 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.438764 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.450202 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxt8z"] Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.474604 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ww7\" (UniqueName: \"kubernetes.io/projected/294de793-1333-48db-a07a-ef58f417ee76-kube-api-access-v9ww7\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.474812 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/294de793-1333-48db-a07a-ef58f417ee76-catalog-content\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.475031 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/294de793-1333-48db-a07a-ef58f417ee76-utilities\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.575428 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/294de793-1333-48db-a07a-ef58f417ee76-utilities\") pod \"certified-operators-cxt8z\" (UID: 
\"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.575485 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ww7\" (UniqueName: \"kubernetes.io/projected/294de793-1333-48db-a07a-ef58f417ee76-kube-api-access-v9ww7\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.575527 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/294de793-1333-48db-a07a-ef58f417ee76-catalog-content\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.576012 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/294de793-1333-48db-a07a-ef58f417ee76-catalog-content\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.576378 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/294de793-1333-48db-a07a-ef58f417ee76-utilities\") pod \"certified-operators-cxt8z\" (UID: \"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.600847 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ww7\" (UniqueName: \"kubernetes.io/projected/294de793-1333-48db-a07a-ef58f417ee76-kube-api-access-v9ww7\") pod \"certified-operators-cxt8z\" (UID: 
\"294de793-1333-48db-a07a-ef58f417ee76\") " pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.639865 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gkz2d"] Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.640798 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.642798 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.651823 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkz2d"] Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.748914 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.777650 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de599c8-a5fa-4879-b910-a0a45bf30b47-utilities\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.777762 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wns6d\" (UniqueName: \"kubernetes.io/projected/9de599c8-a5fa-4879-b910-a0a45bf30b47-kube-api-access-wns6d\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.777794 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de599c8-a5fa-4879-b910-a0a45bf30b47-catalog-content\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.879564 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de599c8-a5fa-4879-b910-a0a45bf30b47-utilities\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.879637 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wns6d\" (UniqueName: \"kubernetes.io/projected/9de599c8-a5fa-4879-b910-a0a45bf30b47-kube-api-access-wns6d\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.879663 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de599c8-a5fa-4879-b910-a0a45bf30b47-catalog-content\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.880329 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9de599c8-a5fa-4879-b910-a0a45bf30b47-catalog-content\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.880716 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9de599c8-a5fa-4879-b910-a0a45bf30b47-utilities\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.904059 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wns6d\" (UniqueName: \"kubernetes.io/projected/9de599c8-a5fa-4879-b910-a0a45bf30b47-kube-api-access-wns6d\") pod \"community-operators-gkz2d\" (UID: \"9de599c8-a5fa-4879-b910-a0a45bf30b47\") " pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:31 crc kubenswrapper[4880]: I1201 03:01:31.953135 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:32 crc kubenswrapper[4880]: I1201 03:01:32.009053 4880 generic.go:334] "Generic (PLEG): container finished" podID="33eaa3d6-064e-4fe4-91b5-c01791456cd7" containerID="1677e115990bc7c952ce22666a336cb05f273941cadd63eb08328f09de92964a" exitCode=0 Dec 01 03:01:32 crc kubenswrapper[4880]: I1201 03:01:32.009121 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cvrm" event={"ID":"33eaa3d6-064e-4fe4-91b5-c01791456cd7","Type":"ContainerDied","Data":"1677e115990bc7c952ce22666a336cb05f273941cadd63eb08328f09de92964a"} Dec 01 03:01:32 crc kubenswrapper[4880]: I1201 03:01:32.030352 4880 generic.go:334] "Generic (PLEG): container finished" podID="1e7784cf-4c39-4f1e-b8c8-b4e26f167c22" containerID="b68ca38d3341596b401c7dace3c294026df0d6e9dc653be1e865b2593096d84a" exitCode=0 Dec 01 03:01:32 crc kubenswrapper[4880]: I1201 03:01:32.030392 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwgms" event={"ID":"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22","Type":"ContainerDied","Data":"b68ca38d3341596b401c7dace3c294026df0d6e9dc653be1e865b2593096d84a"} Dec 01 
03:01:32 crc kubenswrapper[4880]: I1201 03:01:32.180614 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxt8z"] Dec 01 03:01:32 crc kubenswrapper[4880]: W1201 03:01:32.186930 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod294de793_1333_48db_a07a_ef58f417ee76.slice/crio-fb6392e757185e28f7c8ccc7f695eca7af3f26382b408ac14a7c23612ddb0155 WatchSource:0}: Error finding container fb6392e757185e28f7c8ccc7f695eca7af3f26382b408ac14a7c23612ddb0155: Status 404 returned error can't find the container with id fb6392e757185e28f7c8ccc7f695eca7af3f26382b408ac14a7c23612ddb0155 Dec 01 03:01:32 crc kubenswrapper[4880]: I1201 03:01:32.344456 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkz2d"] Dec 01 03:01:32 crc kubenswrapper[4880]: W1201 03:01:32.357155 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9de599c8_a5fa_4879_b910_a0a45bf30b47.slice/crio-aad0432d4025d210e71d7031b868f614681559e5f8a5f606de226eece86c41e6 WatchSource:0}: Error finding container aad0432d4025d210e71d7031b868f614681559e5f8a5f606de226eece86c41e6: Status 404 returned error can't find the container with id aad0432d4025d210e71d7031b868f614681559e5f8a5f606de226eece86c41e6 Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.037523 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cvrm" event={"ID":"33eaa3d6-064e-4fe4-91b5-c01791456cd7","Type":"ContainerStarted","Data":"d44b0934c7c9e654d52e7597ec1a891749d40e01c8b58eef71109d9e3f8e67cf"} Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.038823 4880 generic.go:334] "Generic (PLEG): container finished" podID="9de599c8-a5fa-4879-b910-a0a45bf30b47" containerID="b568ff506f1cca69d3f70075ed99d2b10aab90a4212bf89030535ec4b0a0f150" exitCode=0 Dec 
01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.038891 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz2d" event={"ID":"9de599c8-a5fa-4879-b910-a0a45bf30b47","Type":"ContainerDied","Data":"b568ff506f1cca69d3f70075ed99d2b10aab90a4212bf89030535ec4b0a0f150"} Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.038942 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz2d" event={"ID":"9de599c8-a5fa-4879-b910-a0a45bf30b47","Type":"ContainerStarted","Data":"aad0432d4025d210e71d7031b868f614681559e5f8a5f606de226eece86c41e6"} Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.041559 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwgms" event={"ID":"1e7784cf-4c39-4f1e-b8c8-b4e26f167c22","Type":"ContainerStarted","Data":"db7fc8837db903ea2cb3ddb43f6bbb55cb806346871ee5d41d3403598272f5f9"} Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.044580 4880 generic.go:334] "Generic (PLEG): container finished" podID="294de793-1333-48db-a07a-ef58f417ee76" containerID="f4a9ddd8114ae0b57a4933c90347aa59edf0915f8503602f1b94b7040974716a" exitCode=0 Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.044624 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxt8z" event={"ID":"294de793-1333-48db-a07a-ef58f417ee76","Type":"ContainerDied","Data":"f4a9ddd8114ae0b57a4933c90347aa59edf0915f8503602f1b94b7040974716a"} Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.044646 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxt8z" event={"ID":"294de793-1333-48db-a07a-ef58f417ee76","Type":"ContainerStarted","Data":"fb6392e757185e28f7c8ccc7f695eca7af3f26382b408ac14a7c23612ddb0155"} Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.065727 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-7cvrm" podStartSLOduration=2.415447839 podStartE2EDuration="4.065705938s" podCreationTimestamp="2025-12-01 03:01:29 +0000 UTC" firstStartedPulling="2025-12-01 03:01:30.997812272 +0000 UTC m=+320.509066644" lastFinishedPulling="2025-12-01 03:01:32.648070371 +0000 UTC m=+322.159324743" observedRunningTime="2025-12-01 03:01:33.061355062 +0000 UTC m=+322.572609444" watchObservedRunningTime="2025-12-01 03:01:33.065705938 +0000 UTC m=+322.576960320" Dec 01 03:01:33 crc kubenswrapper[4880]: I1201 03:01:33.085440 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dwgms" podStartSLOduration=1.574028958 podStartE2EDuration="4.085423646s" podCreationTimestamp="2025-12-01 03:01:29 +0000 UTC" firstStartedPulling="2025-12-01 03:01:29.991774554 +0000 UTC m=+319.503028916" lastFinishedPulling="2025-12-01 03:01:32.503169232 +0000 UTC m=+322.014423604" observedRunningTime="2025-12-01 03:01:33.084108378 +0000 UTC m=+322.595362750" watchObservedRunningTime="2025-12-01 03:01:33.085423646 +0000 UTC m=+322.596678038" Dec 01 03:01:34 crc kubenswrapper[4880]: E1201 03:01:34.607973 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod294de793_1333_48db_a07a_ef58f417ee76.slice/crio-conmon-48d8bd1a50aa492c234baabbed56fc7f70611187601bc7fd6456499d6b8eb2b9.scope\": RecentStats: unable to find data in memory cache]" Dec 01 03:01:35 crc kubenswrapper[4880]: I1201 03:01:35.059414 4880 generic.go:334] "Generic (PLEG): container finished" podID="9de599c8-a5fa-4879-b910-a0a45bf30b47" containerID="8e13265ce668e8bae281e3fffa70483a069083e59c25f109a5ecf659daa44d63" exitCode=0 Dec 01 03:01:35 crc kubenswrapper[4880]: I1201 03:01:35.059639 4880 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz2d" event={"ID":"9de599c8-a5fa-4879-b910-a0a45bf30b47","Type":"ContainerDied","Data":"8e13265ce668e8bae281e3fffa70483a069083e59c25f109a5ecf659daa44d63"} Dec 01 03:01:35 crc kubenswrapper[4880]: I1201 03:01:35.062993 4880 generic.go:334] "Generic (PLEG): container finished" podID="294de793-1333-48db-a07a-ef58f417ee76" containerID="48d8bd1a50aa492c234baabbed56fc7f70611187601bc7fd6456499d6b8eb2b9" exitCode=0 Dec 01 03:01:35 crc kubenswrapper[4880]: I1201 03:01:35.063046 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxt8z" event={"ID":"294de793-1333-48db-a07a-ef58f417ee76","Type":"ContainerDied","Data":"48d8bd1a50aa492c234baabbed56fc7f70611187601bc7fd6456499d6b8eb2b9"} Dec 01 03:01:36 crc kubenswrapper[4880]: I1201 03:01:36.069998 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxt8z" event={"ID":"294de793-1333-48db-a07a-ef58f417ee76","Type":"ContainerStarted","Data":"5105a41e3f575e17a9c334c5ddf9da115e8a84eab85ae9d37f0e67a750bea68e"} Dec 01 03:01:36 crc kubenswrapper[4880]: I1201 03:01:36.071969 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkz2d" event={"ID":"9de599c8-a5fa-4879-b910-a0a45bf30b47","Type":"ContainerStarted","Data":"114da48e61cd4d4025f924c08d8561442252545be85d31a11d868a04d4749f1e"} Dec 01 03:01:36 crc kubenswrapper[4880]: I1201 03:01:36.117034 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxt8z" podStartSLOduration=2.5979695 podStartE2EDuration="5.117018728s" podCreationTimestamp="2025-12-01 03:01:31 +0000 UTC" firstStartedPulling="2025-12-01 03:01:33.046274717 +0000 UTC m=+322.557529089" lastFinishedPulling="2025-12-01 03:01:35.565323935 +0000 UTC m=+325.076578317" observedRunningTime="2025-12-01 03:01:36.099455462 +0000 UTC m=+325.610709844" 
watchObservedRunningTime="2025-12-01 03:01:36.117018728 +0000 UTC m=+325.628273100" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.358936 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.359607 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.400911 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.418591 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gkz2d" podStartSLOduration=5.835933554 podStartE2EDuration="8.418572256s" podCreationTimestamp="2025-12-01 03:01:31 +0000 UTC" firstStartedPulling="2025-12-01 03:01:33.040154381 +0000 UTC m=+322.551408763" lastFinishedPulling="2025-12-01 03:01:35.622793093 +0000 UTC m=+325.134047465" observedRunningTime="2025-12-01 03:01:36.120569451 +0000 UTC m=+325.631823823" watchObservedRunningTime="2025-12-01 03:01:39.418572256 +0000 UTC m=+328.929826618" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.557663 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.557730 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:39 crc kubenswrapper[4880]: I1201 03:01:39.624338 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:40 crc kubenswrapper[4880]: I1201 03:01:40.145477 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-dwgms" Dec 01 03:01:40 crc kubenswrapper[4880]: I1201 03:01:40.154225 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7cvrm" Dec 01 03:01:41 crc kubenswrapper[4880]: I1201 03:01:41.749732 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:41 crc kubenswrapper[4880]: I1201 03:01:41.750178 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:41 crc kubenswrapper[4880]: I1201 03:01:41.833126 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:41 crc kubenswrapper[4880]: I1201 03:01:41.953930 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:41 crc kubenswrapper[4880]: I1201 03:01:41.953990 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:42 crc kubenswrapper[4880]: I1201 03:01:42.001866 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:42 crc kubenswrapper[4880]: I1201 03:01:42.165661 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gkz2d" Dec 01 03:01:42 crc kubenswrapper[4880]: I1201 03:01:42.181326 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxt8z" Dec 01 03:01:44 crc kubenswrapper[4880]: E1201 03:01:44.749067 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": 
RecentStats: unable to find data in memory cache]" Dec 01 03:01:54 crc kubenswrapper[4880]: E1201 03:01:54.885439 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Dec 01 03:02:02 crc kubenswrapper[4880]: I1201 03:02:02.532117 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr"] Dec 01 03:02:02 crc kubenswrapper[4880]: I1201 03:02:02.532616 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" podUID="aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" containerName="controller-manager" containerID="cri-o://d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3" gracePeriod=30 Dec 01 03:02:02 crc kubenswrapper[4880]: I1201 03:02:02.624507 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf"] Dec 01 03:02:02 crc kubenswrapper[4880]: I1201 03:02:02.624907 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" podUID="feec82d3-5527-464c-ae07-28a07d37189c" containerName="route-controller-manager" containerID="cri-o://1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac" gracePeriod=30 Dec 01 03:02:02 crc kubenswrapper[4880]: I1201 03:02:02.968305 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:02:02 crc kubenswrapper[4880]: I1201 03:02:02.973461 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.069950 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-client-ca\") pod \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070034 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-config\") pod \"feec82d3-5527-464c-ae07-28a07d37189c\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070074 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz84b\" (UniqueName: \"kubernetes.io/projected/feec82d3-5527-464c-ae07-28a07d37189c-kube-api-access-dz84b\") pod \"feec82d3-5527-464c-ae07-28a07d37189c\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070113 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8vxf\" (UniqueName: \"kubernetes.io/projected/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-kube-api-access-x8vxf\") pod \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070146 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-serving-cert\") pod \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070215 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-client-ca\") pod \"feec82d3-5527-464c-ae07-28a07d37189c\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070256 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-proxy-ca-bundles\") pod \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070289 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feec82d3-5527-464c-ae07-28a07d37189c-serving-cert\") pod \"feec82d3-5527-464c-ae07-28a07d37189c\" (UID: \"feec82d3-5527-464c-ae07-28a07d37189c\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.070332 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-config\") pod \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\" (UID: \"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93\") " Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.071060 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-client-ca" (OuterVolumeSpecName: "client-ca") pod "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" (UID: "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.071159 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" (UID: "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.071565 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-config" (OuterVolumeSpecName: "config") pod "feec82d3-5527-464c-ae07-28a07d37189c" (UID: "feec82d3-5527-464c-ae07-28a07d37189c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.071595 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-client-ca" (OuterVolumeSpecName: "client-ca") pod "feec82d3-5527-464c-ae07-28a07d37189c" (UID: "feec82d3-5527-464c-ae07-28a07d37189c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.071661 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-config" (OuterVolumeSpecName: "config") pod "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" (UID: "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.074834 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feec82d3-5527-464c-ae07-28a07d37189c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "feec82d3-5527-464c-ae07-28a07d37189c" (UID: "feec82d3-5527-464c-ae07-28a07d37189c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.074894 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feec82d3-5527-464c-ae07-28a07d37189c-kube-api-access-dz84b" (OuterVolumeSpecName: "kube-api-access-dz84b") pod "feec82d3-5527-464c-ae07-28a07d37189c" (UID: "feec82d3-5527-464c-ae07-28a07d37189c"). InnerVolumeSpecName "kube-api-access-dz84b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.075061 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" (UID: "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.076517 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-kube-api-access-x8vxf" (OuterVolumeSpecName: "kube-api-access-x8vxf") pod "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" (UID: "aa3403f9-1de6-4fa9-9a8b-d53206b8ca93"). InnerVolumeSpecName "kube-api-access-x8vxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.171806 4880 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.171857 4880 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.171881 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feec82d3-5527-464c-ae07-28a07d37189c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.171923 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.171941 4880 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.171990 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feec82d3-5527-464c-ae07-28a07d37189c-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.172008 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz84b\" (UniqueName: \"kubernetes.io/projected/feec82d3-5527-464c-ae07-28a07d37189c-kube-api-access-dz84b\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.172027 4880 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x8vxf\" (UniqueName: \"kubernetes.io/projected/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-kube-api-access-x8vxf\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.172043 4880 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.235175 4880 generic.go:334] "Generic (PLEG): container finished" podID="feec82d3-5527-464c-ae07-28a07d37189c" containerID="1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac" exitCode=0 Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.235280 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.235298 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" event={"ID":"feec82d3-5527-464c-ae07-28a07d37189c","Type":"ContainerDied","Data":"1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac"} Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.235343 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf" event={"ID":"feec82d3-5527-464c-ae07-28a07d37189c","Type":"ContainerDied","Data":"1c625bf5b263639300b652b123b72a54295df252fdd22490554753d329a27c43"} Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.235372 4880 scope.go:117] "RemoveContainer" containerID="1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.244186 4880 generic.go:334] "Generic (PLEG): container finished" podID="aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" 
containerID="d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3" exitCode=0 Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.244231 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.244254 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" event={"ID":"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93","Type":"ContainerDied","Data":"d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3"} Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.244636 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr" event={"ID":"aa3403f9-1de6-4fa9-9a8b-d53206b8ca93","Type":"ContainerDied","Data":"e1735981f5f0e988c25f8f174b760d30092c5cc62777458eb6d1c34f41e45fde"} Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.277760 4880 scope.go:117] "RemoveContainer" containerID="1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac" Dec 01 03:02:03 crc kubenswrapper[4880]: E1201 03:02:03.292336 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac\": container with ID starting with 1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac not found: ID does not exist" containerID="1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.292700 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac"} err="failed to get container status \"1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac\": rpc error: code = NotFound desc = 
could not find container \"1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac\": container with ID starting with 1af51344fab79d43b0e8595cb9b694e76ebbe1ca948f339be6a94d6afec797ac not found: ID does not exist" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.293103 4880 scope.go:117] "RemoveContainer" containerID="d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.300931 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.326569 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-r2mdf"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.328178 4880 scope.go:117] "RemoveContainer" containerID="d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3" Dec 01 03:02:03 crc kubenswrapper[4880]: E1201 03:02:03.329011 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3\": container with ID starting with d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3 not found: ID does not exist" containerID="d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.329075 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3"} err="failed to get container status \"d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3\": rpc error: code = NotFound desc = could not find container \"d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3\": container with ID starting with 
d9d52913be987fbf4c640974a842f1b0cda449667bf587cfe22efa4f04c479e3 not found: ID does not exist" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.334631 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.341192 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-544f5b9bcd-2gdcr"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.862574 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp"] Dec 01 03:02:03 crc kubenswrapper[4880]: E1201 03:02:03.862918 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feec82d3-5527-464c-ae07-28a07d37189c" containerName="route-controller-manager" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.862939 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="feec82d3-5527-464c-ae07-28a07d37189c" containerName="route-controller-manager" Dec 01 03:02:03 crc kubenswrapper[4880]: E1201 03:02:03.862956 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" containerName="controller-manager" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.862968 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" containerName="controller-manager" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.863145 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="feec82d3-5527-464c-ae07-28a07d37189c" containerName="route-controller-manager" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.863176 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" containerName="controller-manager" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.863754 4880 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.866014 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.867470 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.867942 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.868476 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.869215 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c5c9f856-dc762"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.870301 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.871269 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.871281 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.873407 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.873950 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.874792 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.874847 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.874993 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.875269 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.885610 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.888605 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 
01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.892672 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c5c9f856-dc762"] Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982153 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs2j\" (UniqueName: \"kubernetes.io/projected/fdf782f0-07e1-47b7-ac74-be0be46f561a-kube-api-access-wfs2j\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982210 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfm5\" (UniqueName: \"kubernetes.io/projected/458f4e63-750d-47e9-8c32-8fb4d3857bc8-kube-api-access-qkfm5\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982251 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/458f4e63-750d-47e9-8c32-8fb4d3857bc8-client-ca\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982274 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf782f0-07e1-47b7-ac74-be0be46f561a-serving-cert\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 
03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982328 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-client-ca\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982351 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/458f4e63-750d-47e9-8c32-8fb4d3857bc8-serving-cert\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982377 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-config\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982405 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458f4e63-750d-47e9-8c32-8fb4d3857bc8-config\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:03 crc kubenswrapper[4880]: I1201 03:02:03.982426 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-proxy-ca-bundles\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.083969 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/458f4e63-750d-47e9-8c32-8fb4d3857bc8-serving-cert\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084058 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-config\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084114 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458f4e63-750d-47e9-8c32-8fb4d3857bc8-config\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084150 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-proxy-ca-bundles\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084206 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfs2j\" (UniqueName: \"kubernetes.io/projected/fdf782f0-07e1-47b7-ac74-be0be46f561a-kube-api-access-wfs2j\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084277 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfm5\" (UniqueName: \"kubernetes.io/projected/458f4e63-750d-47e9-8c32-8fb4d3857bc8-kube-api-access-qkfm5\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084343 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/458f4e63-750d-47e9-8c32-8fb4d3857bc8-client-ca\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084383 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf782f0-07e1-47b7-ac74-be0be46f561a-serving-cert\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.084466 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-client-ca\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " 
pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.086187 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-client-ca\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.086425 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-config\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.087102 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fdf782f0-07e1-47b7-ac74-be0be46f561a-proxy-ca-bundles\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.087198 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/458f4e63-750d-47e9-8c32-8fb4d3857bc8-config\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.087421 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/458f4e63-750d-47e9-8c32-8fb4d3857bc8-client-ca\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: 
\"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.092323 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf782f0-07e1-47b7-ac74-be0be46f561a-serving-cert\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.095975 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/458f4e63-750d-47e9-8c32-8fb4d3857bc8-serving-cert\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.119488 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfs2j\" (UniqueName: \"kubernetes.io/projected/fdf782f0-07e1-47b7-ac74-be0be46f561a-kube-api-access-wfs2j\") pod \"controller-manager-c5c9f856-dc762\" (UID: \"fdf782f0-07e1-47b7-ac74-be0be46f561a\") " pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.124088 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfm5\" (UniqueName: \"kubernetes.io/projected/458f4e63-750d-47e9-8c32-8fb4d3857bc8-kube-api-access-qkfm5\") pod \"route-controller-manager-56bfbcdd6c-59vgp\" (UID: \"458f4e63-750d-47e9-8c32-8fb4d3857bc8\") " pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.187977 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.200584 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.688446 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp"] Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.733359 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c5c9f856-dc762"] Dec 01 03:02:04 crc kubenswrapper[4880]: W1201 03:02:04.746224 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdf782f0_07e1_47b7_ac74_be0be46f561a.slice/crio-b4e5a8a8aeb53e961a3901081af586c0dca374c94b747e060bcb8bd09e3ca16c WatchSource:0}: Error finding container b4e5a8a8aeb53e961a3901081af586c0dca374c94b747e060bcb8bd09e3ca16c: Status 404 returned error can't find the container with id b4e5a8a8aeb53e961a3901081af586c0dca374c94b747e060bcb8bd09e3ca16c Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.795241 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3403f9-1de6-4fa9-9a8b-d53206b8ca93" path="/var/lib/kubelet/pods/aa3403f9-1de6-4fa9-9a8b-d53206b8ca93/volumes" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.796613 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feec82d3-5527-464c-ae07-28a07d37189c" path="/var/lib/kubelet/pods/feec82d3-5527-464c-ae07-28a07d37189c/volumes" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.948368 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zshq6"] Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.948987 4880 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.961483 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zshq6"] Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.995936 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20f28491-57ec-42df-b233-c68514f6ed57-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.995987 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-bound-sa-token\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.996006 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20f28491-57ec-42df-b233-c68514f6ed57-trusted-ca\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.996027 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20f28491-57ec-42df-b233-c68514f6ed57-registry-certificates\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.996058 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.996075 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn68x\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-kube-api-access-fn68x\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.996098 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20f28491-57ec-42df-b233-c68514f6ed57-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: I1201 03:02:04.996115 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-registry-tls\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:04 crc kubenswrapper[4880]: E1201 03:02:04.996740 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.051917 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.096865 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20f28491-57ec-42df-b233-c68514f6ed57-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.096938 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-bound-sa-token\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.096957 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20f28491-57ec-42df-b233-c68514f6ed57-trusted-ca\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.096980 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/20f28491-57ec-42df-b233-c68514f6ed57-registry-certificates\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.097009 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn68x\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-kube-api-access-fn68x\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.097035 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20f28491-57ec-42df-b233-c68514f6ed57-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.097051 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-registry-tls\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.097479 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/20f28491-57ec-42df-b233-c68514f6ed57-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.098372 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20f28491-57ec-42df-b233-c68514f6ed57-trusted-ca\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.098441 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/20f28491-57ec-42df-b233-c68514f6ed57-registry-certificates\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.102692 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-registry-tls\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.103084 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/20f28491-57ec-42df-b233-c68514f6ed57-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.118770 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn68x\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-kube-api-access-fn68x\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 
03:02:05.118935 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20f28491-57ec-42df-b233-c68514f6ed57-bound-sa-token\") pod \"image-registry-66df7c8f76-zshq6\" (UID: \"20f28491-57ec-42df-b233-c68514f6ed57\") " pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.260428 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.264053 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" event={"ID":"fdf782f0-07e1-47b7-ac74-be0be46f561a","Type":"ContainerStarted","Data":"3c42b42957e8325546fd9df2a3e85f0350edede22fd0f0a4d9085c51db69a0e6"} Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.264096 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" event={"ID":"fdf782f0-07e1-47b7-ac74-be0be46f561a","Type":"ContainerStarted","Data":"b4e5a8a8aeb53e961a3901081af586c0dca374c94b747e060bcb8bd09e3ca16c"} Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.264423 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.265139 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" event={"ID":"458f4e63-750d-47e9-8c32-8fb4d3857bc8","Type":"ContainerStarted","Data":"d34d28f5881de4637bbe395b20f5ac8c51ab4e797e9867e5d0305694ae866f96"} Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.265168 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" 
event={"ID":"458f4e63-750d-47e9-8c32-8fb4d3857bc8","Type":"ContainerStarted","Data":"c48d44512fc3ee17ca1245c214ff318937c9d26a06e5a08af737ca847e5cf58d"} Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.266608 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.282049 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.284123 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c5c9f856-dc762" podStartSLOduration=3.284105626 podStartE2EDuration="3.284105626s" podCreationTimestamp="2025-12-01 03:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:02:05.281276132 +0000 UTC m=+354.792530524" watchObservedRunningTime="2025-12-01 03:02:05.284105626 +0000 UTC m=+354.795359998" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.312821 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" podStartSLOduration=3.312803918 podStartE2EDuration="3.312803918s" podCreationTimestamp="2025-12-01 03:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:02:05.309139223 +0000 UTC m=+354.820393595" watchObservedRunningTime="2025-12-01 03:02:05.312803918 +0000 UTC m=+354.824058290" Dec 01 03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.328030 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56bfbcdd6c-59vgp" Dec 01 
03:02:05 crc kubenswrapper[4880]: I1201 03:02:05.803745 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zshq6"] Dec 01 03:02:06 crc kubenswrapper[4880]: I1201 03:02:06.273660 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" event={"ID":"20f28491-57ec-42df-b233-c68514f6ed57","Type":"ContainerStarted","Data":"e9ac14fa9540d71cb665cf6b48d16488ddff5ba33b64c4c5e006786865df8dae"} Dec 01 03:02:06 crc kubenswrapper[4880]: I1201 03:02:06.273720 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" event={"ID":"20f28491-57ec-42df-b233-c68514f6ed57","Type":"ContainerStarted","Data":"cc891b7399d9aa8f74cc519ab96573a6f9e4475d854af2cc3ef8d23bef1c55e7"} Dec 01 03:02:06 crc kubenswrapper[4880]: I1201 03:02:06.274129 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:06 crc kubenswrapper[4880]: I1201 03:02:06.291546 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" podStartSLOduration=2.291531523 podStartE2EDuration="2.291531523s" podCreationTimestamp="2025-12-01 03:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:02:06.290658171 +0000 UTC m=+355.801912543" watchObservedRunningTime="2025-12-01 03:02:06.291531523 +0000 UTC m=+355.802785895" Dec 01 03:02:17 crc kubenswrapper[4880]: I1201 03:02:17.368832 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:02:17 crc 
kubenswrapper[4880]: I1201 03:02:17.369655 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:02:25 crc kubenswrapper[4880]: I1201 03:02:25.270993 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zshq6" Dec 01 03:02:25 crc kubenswrapper[4880]: I1201 03:02:25.331004 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zld26"] Dec 01 03:02:47 crc kubenswrapper[4880]: I1201 03:02:47.369340 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:02:47 crc kubenswrapper[4880]: I1201 03:02:47.371255 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:02:50 crc kubenswrapper[4880]: I1201 03:02:50.397793 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" podUID="74dde675-4516-4165-badb-d7233a017fe1" containerName="registry" containerID="cri-o://7dfd91502f8986ac343e40e5e74e0135d31e7d78e014e2e1d1c40ff7c48a36cf" gracePeriod=30 Dec 01 03:02:50 crc kubenswrapper[4880]: I1201 03:02:50.561817 4880 generic.go:334] "Generic (PLEG): container finished" 
podID="74dde675-4516-4165-badb-d7233a017fe1" containerID="7dfd91502f8986ac343e40e5e74e0135d31e7d78e014e2e1d1c40ff7c48a36cf" exitCode=0 Dec 01 03:02:50 crc kubenswrapper[4880]: I1201 03:02:50.561860 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" event={"ID":"74dde675-4516-4165-badb-d7233a017fe1","Type":"ContainerDied","Data":"7dfd91502f8986ac343e40e5e74e0135d31e7d78e014e2e1d1c40ff7c48a36cf"} Dec 01 03:02:50 crc kubenswrapper[4880]: I1201 03:02:50.883139 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067656 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-registry-certificates\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067692 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94zx2\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-kube-api-access-94zx2\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067837 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067859 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-registry-tls\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067938 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74dde675-4516-4165-badb-d7233a017fe1-ca-trust-extracted\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067958 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74dde675-4516-4165-badb-d7233a017fe1-installation-pull-secrets\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.067977 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-bound-sa-token\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.068117 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-trusted-ca\") pod \"74dde675-4516-4165-badb-d7233a017fe1\" (UID: \"74dde675-4516-4165-badb-d7233a017fe1\") " Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.069700 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.069958 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.074313 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-kube-api-access-94zx2" (OuterVolumeSpecName: "kube-api-access-94zx2") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "kube-api-access-94zx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.075774 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74dde675-4516-4165-badb-d7233a017fe1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.076033 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.079346 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.089622 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.107278 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74dde675-4516-4165-badb-d7233a017fe1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "74dde675-4516-4165-badb-d7233a017fe1" (UID: "74dde675-4516-4165-badb-d7233a017fe1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.170383 4880 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.170587 4880 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74dde675-4516-4165-badb-d7233a017fe1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.170666 4880 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.170731 4880 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74dde675-4516-4165-badb-d7233a017fe1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.170814 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.171620 4880 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74dde675-4516-4165-badb-d7233a017fe1-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.171674 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94zx2\" (UniqueName: \"kubernetes.io/projected/74dde675-4516-4165-badb-d7233a017fe1-kube-api-access-94zx2\") on node \"crc\" DevicePath \"\"" Dec 01 03:02:51 crc 
kubenswrapper[4880]: I1201 03:02:51.569501 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" event={"ID":"74dde675-4516-4165-badb-d7233a017fe1","Type":"ContainerDied","Data":"f6990c0605c627a1605ed253e259548380cce104db5507e3e52c5249496598fe"} Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.569552 4880 scope.go:117] "RemoveContainer" containerID="7dfd91502f8986ac343e40e5e74e0135d31e7d78e014e2e1d1c40ff7c48a36cf" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.569560 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zld26" Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.614677 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zld26"] Dec 01 03:02:51 crc kubenswrapper[4880]: I1201 03:02:51.618927 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zld26"] Dec 01 03:02:52 crc kubenswrapper[4880]: I1201 03:02:52.797862 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74dde675-4516-4165-badb-d7233a017fe1" path="/var/lib/kubelet/pods/74dde675-4516-4165-badb-d7233a017fe1/volumes" Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.368694 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.369373 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.369479 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.370272 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a2f470a436bc91169f368a1c915cb04ff7c639b62bb02c7613be50a7734fc88"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.370374 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://1a2f470a436bc91169f368a1c915cb04ff7c639b62bb02c7613be50a7734fc88" gracePeriod=600 Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.782026 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="1a2f470a436bc91169f368a1c915cb04ff7c639b62bb02c7613be50a7734fc88" exitCode=0 Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.782123 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"1a2f470a436bc91169f368a1c915cb04ff7c639b62bb02c7613be50a7734fc88"} Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.782429 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"8263d28df2319abe6fc219ddd5fbeb3c45e1155279155d321212f1fcda96f18d"} Dec 01 03:03:17 crc kubenswrapper[4880]: I1201 03:03:17.782517 4880 scope.go:117] "RemoveContainer" containerID="9e58fa4b6be0480ecdbf949557de044091b932fa9eb736bc8b60295f64696a37" Dec 01 03:05:17 crc kubenswrapper[4880]: I1201 03:05:17.368682 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:05:17 crc kubenswrapper[4880]: I1201 03:05:17.369443 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:05:47 crc kubenswrapper[4880]: I1201 03:05:47.368738 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:05:47 crc kubenswrapper[4880]: I1201 03:05:47.369338 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.206328 4880 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-cainjector-7f985d654d-vv8cb"] Dec 01 03:06:17 crc kubenswrapper[4880]: E1201 03:06:17.207072 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dde675-4516-4165-badb-d7233a017fe1" containerName="registry" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.207087 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dde675-4516-4165-badb-d7233a017fe1" containerName="registry" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.207225 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="74dde675-4516-4165-badb-d7233a017fe1" containerName="registry" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.207659 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.210001 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.210052 4880 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5wcgm" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.217046 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vv8cb"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.217578 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.226228 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dkggz"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.226852 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dkggz" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.229333 4880 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9kkbk" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.244042 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dkggz"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.256770 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7d677"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.257367 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.262093 4880 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-f2wpw" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.270997 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7d677"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.330174 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cth4j\" (UniqueName: \"kubernetes.io/projected/4695fcb6-9fec-4979-8458-60b4e9c85994-kube-api-access-cth4j\") pod \"cert-manager-cainjector-7f985d654d-vv8cb\" (UID: \"4695fcb6-9fec-4979-8458-60b4e9c85994\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.330416 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtpk\" (UniqueName: \"kubernetes.io/projected/e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4-kube-api-access-cxtpk\") pod \"cert-manager-5b446d88c5-dkggz\" (UID: \"e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4\") " 
pod="cert-manager/cert-manager-5b446d88c5-dkggz" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.369589 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.369824 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.369944 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.370524 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8263d28df2319abe6fc219ddd5fbeb3c45e1155279155d321212f1fcda96f18d"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.370637 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://8263d28df2319abe6fc219ddd5fbeb3c45e1155279155d321212f1fcda96f18d" gracePeriod=600 Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.431566 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bj5wp\" (UniqueName: \"kubernetes.io/projected/acdd9a5d-93cc-489d-930f-96100c5b89f8-kube-api-access-bj5wp\") pod \"cert-manager-webhook-5655c58dd6-7d677\" (UID: \"acdd9a5d-93cc-489d-930f-96100c5b89f8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.432083 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtpk\" (UniqueName: \"kubernetes.io/projected/e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4-kube-api-access-cxtpk\") pod \"cert-manager-5b446d88c5-dkggz\" (UID: \"e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4\") " pod="cert-manager/cert-manager-5b446d88c5-dkggz" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.432185 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cth4j\" (UniqueName: \"kubernetes.io/projected/4695fcb6-9fec-4979-8458-60b4e9c85994-kube-api-access-cth4j\") pod \"cert-manager-cainjector-7f985d654d-vv8cb\" (UID: \"4695fcb6-9fec-4979-8458-60b4e9c85994\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.455521 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtpk\" (UniqueName: \"kubernetes.io/projected/e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4-kube-api-access-cxtpk\") pod \"cert-manager-5b446d88c5-dkggz\" (UID: \"e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4\") " pod="cert-manager/cert-manager-5b446d88c5-dkggz" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.455772 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cth4j\" (UniqueName: \"kubernetes.io/projected/4695fcb6-9fec-4979-8458-60b4e9c85994-kube-api-access-cth4j\") pod \"cert-manager-cainjector-7f985d654d-vv8cb\" (UID: \"4695fcb6-9fec-4979-8458-60b4e9c85994\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 
03:06:17.522072 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.535749 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj5wp\" (UniqueName: \"kubernetes.io/projected/acdd9a5d-93cc-489d-930f-96100c5b89f8-kube-api-access-bj5wp\") pod \"cert-manager-webhook-5655c58dd6-7d677\" (UID: \"acdd9a5d-93cc-489d-930f-96100c5b89f8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.539318 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dkggz" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.554000 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj5wp\" (UniqueName: \"kubernetes.io/projected/acdd9a5d-93cc-489d-930f-96100c5b89f8-kube-api-access-bj5wp\") pod \"cert-manager-webhook-5655c58dd6-7d677\" (UID: \"acdd9a5d-93cc-489d-930f-96100c5b89f8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.570760 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.773251 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vv8cb"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.794412 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.814266 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-7d677"] Dec 01 03:06:17 crc kubenswrapper[4880]: W1201 03:06:17.816742 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdd9a5d_93cc_489d_930f_96100c5b89f8.slice/crio-bfd9cbad8f612e011ee442bd74e7d2ffd43372b9ae1a14d5e5a45a5d2ea09635 WatchSource:0}: Error finding container bfd9cbad8f612e011ee442bd74e7d2ffd43372b9ae1a14d5e5a45a5d2ea09635: Status 404 returned error can't find the container with id bfd9cbad8f612e011ee442bd74e7d2ffd43372b9ae1a14d5e5a45a5d2ea09635 Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.852272 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dkggz"] Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.963631 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dkggz" event={"ID":"e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4","Type":"ContainerStarted","Data":"bb3004afbfb1ecab91d3cf5aa5ebc263f3fdff9606a5a8121e1beed7ef0f387b"} Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.974898 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" event={"ID":"4695fcb6-9fec-4979-8458-60b4e9c85994","Type":"ContainerStarted","Data":"e27eab7758be9826025e80027726d513cac8cd8ffb05fe6375760077f7b1a77d"} Dec 01 03:06:17 crc kubenswrapper[4880]: 
I1201 03:06:17.978745 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" event={"ID":"acdd9a5d-93cc-489d-930f-96100c5b89f8","Type":"ContainerStarted","Data":"bfd9cbad8f612e011ee442bd74e7d2ffd43372b9ae1a14d5e5a45a5d2ea09635"} Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.980903 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="8263d28df2319abe6fc219ddd5fbeb3c45e1155279155d321212f1fcda96f18d" exitCode=0 Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.980933 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"8263d28df2319abe6fc219ddd5fbeb3c45e1155279155d321212f1fcda96f18d"} Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.980951 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"9e7e08cb7118ecb74645ada937d5a46b564fd983b8d99301ceea950ba427688d"} Dec 01 03:06:17 crc kubenswrapper[4880]: I1201 03:06:17.980967 4880 scope.go:117] "RemoveContainer" containerID="1a2f470a436bc91169f368a1c915cb04ff7c639b62bb02c7613be50a7734fc88" Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.008085 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" event={"ID":"4695fcb6-9fec-4979-8458-60b4e9c85994","Type":"ContainerStarted","Data":"00dee93eb32b642ce433dac616f5f235559303fd7a7d8fcc8fdd56016bd7c4a7"} Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.012914 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" 
event={"ID":"acdd9a5d-93cc-489d-930f-96100c5b89f8","Type":"ContainerStarted","Data":"2424988ba241b79a4d2d7609989b4f46b8db7ee745ddd9a92c48ce32b7f437ef"} Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.013311 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.016042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dkggz" event={"ID":"e6d6f8d4-c44b-4327-8e78-3ef4e0c419d4","Type":"ContainerStarted","Data":"d07b86aaa21327fdc96258a84a4c66a8c6a91a67f58c60d47614d4c297d5c328"} Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.030687 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-vv8cb" podStartSLOduration=1.702471083 podStartE2EDuration="5.03067045s" podCreationTimestamp="2025-12-01 03:06:17 +0000 UTC" firstStartedPulling="2025-12-01 03:06:17.7942246 +0000 UTC m=+607.305478972" lastFinishedPulling="2025-12-01 03:06:21.122423967 +0000 UTC m=+610.633678339" observedRunningTime="2025-12-01 03:06:22.027411205 +0000 UTC m=+611.538665617" watchObservedRunningTime="2025-12-01 03:06:22.03067045 +0000 UTC m=+611.541924832" Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.052746 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-dkggz" podStartSLOduration=1.877975274 podStartE2EDuration="5.05272742s" podCreationTimestamp="2025-12-01 03:06:17 +0000 UTC" firstStartedPulling="2025-12-01 03:06:17.86417481 +0000 UTC m=+607.375429182" lastFinishedPulling="2025-12-01 03:06:21.038926926 +0000 UTC m=+610.550181328" observedRunningTime="2025-12-01 03:06:22.049717513 +0000 UTC m=+611.560971915" watchObservedRunningTime="2025-12-01 03:06:22.05272742 +0000 UTC m=+611.563981802" Dec 01 03:06:22 crc kubenswrapper[4880]: I1201 03:06:22.077295 4880 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" podStartSLOduration=1.862320829 podStartE2EDuration="5.077276496s" podCreationTimestamp="2025-12-01 03:06:17 +0000 UTC" firstStartedPulling="2025-12-01 03:06:17.819324769 +0000 UTC m=+607.330579141" lastFinishedPulling="2025-12-01 03:06:21.034280406 +0000 UTC m=+610.545534808" observedRunningTime="2025-12-01 03:06:22.076636549 +0000 UTC m=+611.587890941" watchObservedRunningTime="2025-12-01 03:06:22.077276496 +0000 UTC m=+611.588530888" Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.575106 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-7d677" Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.600688 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-52bx6"] Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.601556 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-controller" containerID="cri-o://805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.601588 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="northd" containerID="cri-o://2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.601775 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.601824 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="sbdb" containerID="cri-o://4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.601925 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-node" containerID="cri-o://df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.602014 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="nbdb" containerID="cri-o://75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.602033 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-acl-logging" containerID="cri-o://8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.662178 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" containerID="cri-o://06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" gracePeriod=30 Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.970294 4880 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/3.log" Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.973535 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovn-acl-logging/0.log" Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.974197 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovn-controller/0.log" Dec 01 03:06:27 crc kubenswrapper[4880]: I1201 03:06:27.974805 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.046738 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h6xdx"] Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.046929 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="northd" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.046940 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="northd" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.046951 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.046958 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.046965 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kubecfg-setup" Dec 01 03:06:28 crc 
kubenswrapper[4880]: I1201 03:06:28.046972 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kubecfg-setup" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.046981 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-node" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.046987 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-node" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.046994 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.046999 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047009 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047014 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047024 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="sbdb" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047030 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="sbdb" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047038 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 
crc kubenswrapper[4880]: I1201 03:06:28.047044 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047051 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="nbdb" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047057 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="nbdb" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047065 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-acl-logging" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047071 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-acl-logging" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047081 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047088 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047166 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="northd" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047174 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-node" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047183 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: 
I1201 03:06:28.047189 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="sbdb" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047199 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047207 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-acl-logging" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047213 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047219 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047227 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="nbdb" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047233 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovn-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047243 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.047321 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047328 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 
crc kubenswrapper[4880]: E1201 03:06:28.047339 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047344 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.047424 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerName="ovnkube-controller" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.048814 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.060091 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovnkube-controller/3.log" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.063632 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovn-acl-logging/0.log" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.064113 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-52bx6_9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/ovn-controller/0.log" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065259 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" exitCode=0 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065288 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" exitCode=0 Dec 01 
03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065301 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" exitCode=0 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065310 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" exitCode=0 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065320 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" exitCode=0 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065329 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" exitCode=0 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065338 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" exitCode=143 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065347 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" exitCode=143 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065365 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065380 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065424 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065448 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065468 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065488 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065507 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065540 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065558 4880 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065569 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065580 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065591 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065602 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065613 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065624 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065635 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065650 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065666 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065679 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065690 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065700 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065711 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065721 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065731 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} Dec 01 
03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065742 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065752 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065763 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065778 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065793 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065806 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065817 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065828 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065838 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065849 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065859 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065931 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065944 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065954 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-52bx6" event={"ID":"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697","Type":"ContainerDied","Data":"f92ce00341c3592a1e69a9c9ea80984ccaf07e2622c309e76def046947e2a523"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.065986 4880 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066000 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066011 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066021 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066033 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066043 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066054 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066065 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066076 4880 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066086 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.066110 4880 scope.go:117] "RemoveContainer" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.076849 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/2.log" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.077638 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/1.log" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.077705 4880 generic.go:334] "Generic (PLEG): container finished" podID="6366d207-93fa-4b9f-ae70-0bab0b293db3" containerID="2ad7d3e3ac06f8f38927fe3579d053bcdf1b0eb2b14c23f65e4968eb708a8a38" exitCode=2 Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.077765 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerDied","Data":"2ad7d3e3ac06f8f38927fe3579d053bcdf1b0eb2b14c23f65e4968eb708a8a38"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.077799 4880 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767"} Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.078358 4880 scope.go:117] "RemoveContainer" 
containerID="2ad7d3e3ac06f8f38927fe3579d053bcdf1b0eb2b14c23f65e4968eb708a8a38" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.078637 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5znrt_openshift-multus(6366d207-93fa-4b9f-ae70-0bab0b293db3)\"" pod="openshift-multus/multus-5znrt" podUID="6366d207-93fa-4b9f-ae70-0bab0b293db3" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.091458 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-openvswitch\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.091671 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.091714 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-env-overrides\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.091838 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-netd\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092004 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092079 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092103 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-systemd\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092145 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092185 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092227 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-var-lib-openvswitch\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092251 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-node-log\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092270 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-netns\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092414 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-slash\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092444 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-slash" (OuterVolumeSpecName: "host-slash") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092575 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092602 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-node-log" (OuterVolumeSpecName: "node-log") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092650 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-kubelet\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092726 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092767 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-ovn-kubernetes\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092699 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.092901 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093308 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-script-lib\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093336 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-ovn\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093378 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-etc-openvswitch\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093407 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovn-node-metrics-cert\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093452 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvnjs\" (UniqueName: \"kubernetes.io/projected/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-kube-api-access-wvnjs\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093476 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-config\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093503 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-bin\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093547 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-log-socket\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093566 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-systemd-units\") pod \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\" (UID: \"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697\") " Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093670 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093728 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-run-netns\") pod \"ovnkube-node-h6xdx\" (UID: 
\"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093781 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-systemd-units\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093812 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-slash\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.093882 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-cni-netd\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094025 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-log-socket" (OuterVolumeSpecName: "log-socket") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094219 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-log-socket\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094265 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-ovn\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094312 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-env-overrides\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-kubelet\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094359 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094382 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094404 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094403 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovn-node-metrics-cert\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094459 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094484 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-cni-bin\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094512 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovnkube-script-lib\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094540 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvlq\" (UniqueName: \"kubernetes.io/projected/0b341575-4f63-4b9a-a5a8-f44b0861b38b-kube-api-access-5mvlq\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094557 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-etc-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094574 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-systemd\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.094584 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.095557 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.095719 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-node-log\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.095844 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-var-lib-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.096354 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100031 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100086 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovnkube-config\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100251 4880 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100279 4880 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100279 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100315 4880 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-log-socket\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100333 4880 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100368 4880 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100380 4880 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100393 4880 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100406 4880 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100439 4880 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100460 4880 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-node-log\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100476 4880 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100494 4880 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-slash\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100510 4880 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100530 4880 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100549 4880 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100566 4880 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.100583 4880 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.103821 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-kube-api-access-wvnjs" (OuterVolumeSpecName: "kube-api-access-wvnjs") pod 
"9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "kube-api-access-wvnjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.106228 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.117301 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" (UID: "9e4d730b-5ca7-46cf-a62a-3c4a54bc1697"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.131365 4880 scope.go:117] "RemoveContainer" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.154389 4880 scope.go:117] "RemoveContainer" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.168045 4880 scope.go:117] "RemoveContainer" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.183393 4880 scope.go:117] "RemoveContainer" containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.195518 4880 scope.go:117] "RemoveContainer" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201470 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201513 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-run-netns\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201542 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-systemd-units\") pod 
\"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201565 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-slash\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201608 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-cni-netd\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201613 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-run-ovn-kubernetes\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201633 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-log-socket\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201679 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-log-socket\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201677 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-run-netns\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201705 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-ovn\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201735 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-cni-netd\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201740 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-env-overrides\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201768 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-ovn\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201773 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-systemd-units\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201800 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-kubelet\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201717 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-slash\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201777 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-kubelet\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201912 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovn-node-metrics-cert\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.201950 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202150 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-cni-bin\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202215 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovnkube-script-lib\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202251 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mvlq\" (UniqueName: \"kubernetes.io/projected/0b341575-4f63-4b9a-a5a8-f44b0861b38b-kube-api-access-5mvlq\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202298 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-etc-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202306 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-env-overrides\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202324 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-systemd\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202354 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-cni-bin\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202397 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202426 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-etc-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.202996 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovnkube-script-lib\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.203056 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-systemd\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.203094 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-node-log\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.203986 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-var-lib-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204023 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204046 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovnkube-config\") pod 
\"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204129 4880 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204145 4880 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204158 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvnjs\" (UniqueName: \"kubernetes.io/projected/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697-kube-api-access-wvnjs\") on node \"crc\" DevicePath \"\"" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204464 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-var-lib-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.203117 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-node-log\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204532 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b341575-4f63-4b9a-a5a8-f44b0861b38b-run-openvswitch\") pod \"ovnkube-node-h6xdx\" (UID: 
\"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.204816 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovnkube-config\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.206322 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b341575-4f63-4b9a-a5a8-f44b0861b38b-ovn-node-metrics-cert\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.211797 4880 scope.go:117] "RemoveContainer" containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.220634 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mvlq\" (UniqueName: \"kubernetes.io/projected/0b341575-4f63-4b9a-a5a8-f44b0861b38b-kube-api-access-5mvlq\") pod \"ovnkube-node-h6xdx\" (UID: \"0b341575-4f63-4b9a-a5a8-f44b0861b38b\") " pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.224819 4880 scope.go:117] "RemoveContainer" containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.240545 4880 scope.go:117] "RemoveContainer" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.258303 4880 scope.go:117] "RemoveContainer" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc 
kubenswrapper[4880]: E1201 03:06:28.259004 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": container with ID starting with 06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468 not found: ID does not exist" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.259063 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} err="failed to get container status \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": rpc error: code = NotFound desc = could not find container \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": container with ID starting with 06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.259102 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.259469 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": container with ID starting with eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4 not found: ID does not exist" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.259492 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} err="failed to get container status 
\"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": rpc error: code = NotFound desc = could not find container \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": container with ID starting with eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.259511 4880 scope.go:117] "RemoveContainer" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.259845 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": container with ID starting with 4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3 not found: ID does not exist" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.259958 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} err="failed to get container status \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": rpc error: code = NotFound desc = could not find container \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": container with ID starting with 4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.260099 4880 scope.go:117] "RemoveContainer" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.260914 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": container with ID starting with 75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41 not found: ID does not exist" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.260954 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} err="failed to get container status \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": rpc error: code = NotFound desc = could not find container \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": container with ID starting with 75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.261027 4880 scope.go:117] "RemoveContainer" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.261578 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": container with ID starting with 2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d not found: ID does not exist" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.261629 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} err="failed to get container status \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": rpc error: code = NotFound desc = could not find container \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": container with ID 
starting with 2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.261646 4880 scope.go:117] "RemoveContainer" containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.262193 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": container with ID starting with ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1 not found: ID does not exist" containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.262250 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} err="failed to get container status \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": rpc error: code = NotFound desc = could not find container \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": container with ID starting with ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.262282 4880 scope.go:117] "RemoveContainer" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.262787 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": container with ID starting with df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad not found: ID does not exist" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" Dec 01 
03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.262830 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} err="failed to get container status \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": rpc error: code = NotFound desc = could not find container \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": container with ID starting with df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.262857 4880 scope.go:117] "RemoveContainer" containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.263565 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": container with ID starting with 8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89 not found: ID does not exist" containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.263604 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} err="failed to get container status \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": rpc error: code = NotFound desc = could not find container \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": container with ID starting with 8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.263647 4880 scope.go:117] "RemoveContainer" 
containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.264044 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": container with ID starting with 805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027 not found: ID does not exist" containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.264089 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} err="failed to get container status \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": rpc error: code = NotFound desc = could not find container \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": container with ID starting with 805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.264118 4880 scope.go:117] "RemoveContainer" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" Dec 01 03:06:28 crc kubenswrapper[4880]: E1201 03:06:28.264688 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": container with ID starting with 7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739 not found: ID does not exist" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.264742 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} err="failed to get container status \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": rpc error: code = NotFound desc = could not find container \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": container with ID starting with 7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.264760 4880 scope.go:117] "RemoveContainer" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.265269 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} err="failed to get container status \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": rpc error: code = NotFound desc = could not find container \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": container with ID starting with 06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.265315 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.265752 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} err="failed to get container status \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": rpc error: code = NotFound desc = could not find container \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": container with ID starting with eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4 not found: ID does not 
exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.265782 4880 scope.go:117] "RemoveContainer" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.266199 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} err="failed to get container status \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": rpc error: code = NotFound desc = could not find container \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": container with ID starting with 4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.266245 4880 scope.go:117] "RemoveContainer" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.266664 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} err="failed to get container status \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": rpc error: code = NotFound desc = could not find container \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": container with ID starting with 75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.266715 4880 scope.go:117] "RemoveContainer" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.267156 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} err="failed to get container status 
\"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": rpc error: code = NotFound desc = could not find container \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": container with ID starting with 2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.267197 4880 scope.go:117] "RemoveContainer" containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.267615 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} err="failed to get container status \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": rpc error: code = NotFound desc = could not find container \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": container with ID starting with ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.267649 4880 scope.go:117] "RemoveContainer" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.268096 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} err="failed to get container status \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": rpc error: code = NotFound desc = could not find container \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": container with ID starting with df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.268138 4880 scope.go:117] "RemoveContainer" 
containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.268579 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} err="failed to get container status \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": rpc error: code = NotFound desc = could not find container \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": container with ID starting with 8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.268613 4880 scope.go:117] "RemoveContainer" containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.270473 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} err="failed to get container status \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": rpc error: code = NotFound desc = could not find container \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": container with ID starting with 805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.270545 4880 scope.go:117] "RemoveContainer" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.270991 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} err="failed to get container status \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": rpc error: code = NotFound desc = could 
not find container \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": container with ID starting with 7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.271030 4880 scope.go:117] "RemoveContainer" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.271554 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} err="failed to get container status \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": rpc error: code = NotFound desc = could not find container \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": container with ID starting with 06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.271603 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.272153 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} err="failed to get container status \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": rpc error: code = NotFound desc = could not find container \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": container with ID starting with eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.272207 4880 scope.go:117] "RemoveContainer" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 
03:06:28.272703 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} err="failed to get container status \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": rpc error: code = NotFound desc = could not find container \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": container with ID starting with 4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.272757 4880 scope.go:117] "RemoveContainer" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.273229 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} err="failed to get container status \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": rpc error: code = NotFound desc = could not find container \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": container with ID starting with 75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.273276 4880 scope.go:117] "RemoveContainer" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.273802 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} err="failed to get container status \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": rpc error: code = NotFound desc = could not find container \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": container with ID starting with 
2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.273839 4880 scope.go:117] "RemoveContainer" containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.274359 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} err="failed to get container status \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": rpc error: code = NotFound desc = could not find container \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": container with ID starting with ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.274408 4880 scope.go:117] "RemoveContainer" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.274836 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} err="failed to get container status \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": rpc error: code = NotFound desc = could not find container \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": container with ID starting with df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.275327 4880 scope.go:117] "RemoveContainer" containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.275956 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} err="failed to get container status \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": rpc error: code = NotFound desc = could not find container \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": container with ID starting with 8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.276048 4880 scope.go:117] "RemoveContainer" containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.276562 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} err="failed to get container status \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": rpc error: code = NotFound desc = could not find container \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": container with ID starting with 805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.276621 4880 scope.go:117] "RemoveContainer" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.277115 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} err="failed to get container status \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": rpc error: code = NotFound desc = could not find container \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": container with ID starting with 7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739 not found: ID does not 
exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.277169 4880 scope.go:117] "RemoveContainer" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.277945 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} err="failed to get container status \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": rpc error: code = NotFound desc = could not find container \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": container with ID starting with 06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.277990 4880 scope.go:117] "RemoveContainer" containerID="eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.278594 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4"} err="failed to get container status \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": rpc error: code = NotFound desc = could not find container \"eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4\": container with ID starting with eb18e5acff45f705de626c0602b4537128dbc42c33da4cf37b8eb3216cea81e4 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.278638 4880 scope.go:117] "RemoveContainer" containerID="4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.280068 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3"} err="failed to get container status 
\"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": rpc error: code = NotFound desc = could not find container \"4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3\": container with ID starting with 4ee623062f729f321e5ec34b2818d5d8a6fc4e595bde5718ced3102a607127e3 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.280109 4880 scope.go:117] "RemoveContainer" containerID="75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.280673 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41"} err="failed to get container status \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": rpc error: code = NotFound desc = could not find container \"75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41\": container with ID starting with 75af41f2a7f520bcad76851ed4b44106b3546a1c56f630480216e876c0269d41 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.280713 4880 scope.go:117] "RemoveContainer" containerID="2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.281320 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d"} err="failed to get container status \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": rpc error: code = NotFound desc = could not find container \"2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d\": container with ID starting with 2d3ee26cffc9d31157ecd9bab8a7fd328207b7b506848180a28713d2e2f8fe1d not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.281361 4880 scope.go:117] "RemoveContainer" 
containerID="ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.281846 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1"} err="failed to get container status \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": rpc error: code = NotFound desc = could not find container \"ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1\": container with ID starting with ba5bad3c486003d6a59c49f1cfe6b322ff1035d995279d2c55133e67de4451d1 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.281936 4880 scope.go:117] "RemoveContainer" containerID="df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.282446 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad"} err="failed to get container status \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": rpc error: code = NotFound desc = could not find container \"df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad\": container with ID starting with df392c7de0505b65d6cda5e5aa69bbe89d09b4a9a4faf084f0c6417f14f077ad not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.282481 4880 scope.go:117] "RemoveContainer" containerID="8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.282946 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89"} err="failed to get container status \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": rpc error: code = NotFound desc = could 
not find container \"8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89\": container with ID starting with 8efb5fc897dab8e9aeff068741bbaa3abbbf0bfe0ef61b053ed7c02b211fff89 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.282980 4880 scope.go:117] "RemoveContainer" containerID="805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.283449 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027"} err="failed to get container status \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": rpc error: code = NotFound desc = could not find container \"805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027\": container with ID starting with 805dc9c2864447e9ebb4b9896328d59e5b3e8d36a92c0365fbe72a0d498fb027 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.283493 4880 scope.go:117] "RemoveContainer" containerID="7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.283943 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739"} err="failed to get container status \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": rpc error: code = NotFound desc = could not find container \"7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739\": container with ID starting with 7c6d6f4401d23f3b40be5ae65d947e8c54e7bd6e3b36e97f54512c5c19fe1739 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.283977 4880 scope.go:117] "RemoveContainer" containerID="06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 
03:06:28.284404 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468"} err="failed to get container status \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": rpc error: code = NotFound desc = could not find container \"06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468\": container with ID starting with 06b198cf725057d0cc9aea1be06803164e5b33bdc343ae48abe4de389facc468 not found: ID does not exist" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.370063 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.465623 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-52bx6"] Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.469474 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-52bx6"] Dec 01 03:06:28 crc kubenswrapper[4880]: I1201 03:06:28.792701 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e4d730b-5ca7-46cf-a62a-3c4a54bc1697" path="/var/lib/kubelet/pods/9e4d730b-5ca7-46cf-a62a-3c4a54bc1697/volumes" Dec 01 03:06:29 crc kubenswrapper[4880]: I1201 03:06:29.086565 4880 generic.go:334] "Generic (PLEG): container finished" podID="0b341575-4f63-4b9a-a5a8-f44b0861b38b" containerID="67a0368ba84a843f3783223b0102db0c382a60ad880195d955e63fa932b0f784" exitCode=0 Dec 01 03:06:29 crc kubenswrapper[4880]: I1201 03:06:29.086624 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerDied","Data":"67a0368ba84a843f3783223b0102db0c382a60ad880195d955e63fa932b0f784"} Dec 01 03:06:29 crc kubenswrapper[4880]: I1201 03:06:29.086671 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"7f7dc4c51e1753e51561b04af0bfd7f5ef8bedfd8eefe393ada21eda788ec996"} Dec 01 03:06:30 crc kubenswrapper[4880]: I1201 03:06:30.099514 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"2eb0c4a2f065818a7b57d59487b7b7c7be295b384d48b18d7b4a20548f9646f8"} Dec 01 03:06:30 crc kubenswrapper[4880]: I1201 03:06:30.099861 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"27be4375465fdfee56b66c01d592aa6ed9b904e3ef7e3bfdd3b2b050c6c6532c"} Dec 01 03:06:30 crc kubenswrapper[4880]: I1201 03:06:30.099921 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"cd2e7be6aab712a36097ac5ec30d516deb419039ecbaaf52b036125440968e29"} Dec 01 03:06:30 crc kubenswrapper[4880]: I1201 03:06:30.099939 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"0fadf411a1c0a92812f8f36a7e975136725074cf13c69a89be8631fe76d49864"} Dec 01 03:06:30 crc kubenswrapper[4880]: I1201 03:06:30.099956 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"f0ee7a6d2e28f05b3128ae016ae3b6be0611c67e60b4f6d0056c96b1a8caf370"} Dec 01 03:06:30 crc kubenswrapper[4880]: I1201 03:06:30.099972 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" 
event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"b2c3f93305a808ca7aa98189fe99f35c3996247436d2e605a756b9e29e8aeee2"} Dec 01 03:06:33 crc kubenswrapper[4880]: I1201 03:06:33.126136 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"13c7e116888e101e441e4de7204a69b20b1f8fe1ebe288c6bf55ad7985c050ef"} Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.143735 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" event={"ID":"0b341575-4f63-4b9a-a5a8-f44b0861b38b","Type":"ContainerStarted","Data":"a87a3263eabb79bc5fff442c476e7b75e993c1f86824dca23c24921805ae2145"} Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.144314 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.144370 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.144389 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.175692 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.186555 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" podStartSLOduration=7.18653138 podStartE2EDuration="7.18653138s" podCreationTimestamp="2025-12-01 03:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:06:35.180496434 +0000 UTC 
m=+624.691750816" watchObservedRunningTime="2025-12-01 03:06:35.18653138 +0000 UTC m=+624.697785782" Dec 01 03:06:35 crc kubenswrapper[4880]: I1201 03:06:35.200636 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:06:41 crc kubenswrapper[4880]: I1201 03:06:41.783752 4880 scope.go:117] "RemoveContainer" containerID="2ad7d3e3ac06f8f38927fe3579d053bcdf1b0eb2b14c23f65e4968eb708a8a38" Dec 01 03:06:41 crc kubenswrapper[4880]: E1201 03:06:41.784324 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5znrt_openshift-multus(6366d207-93fa-4b9f-ae70-0bab0b293db3)\"" pod="openshift-multus/multus-5znrt" podUID="6366d207-93fa-4b9f-ae70-0bab0b293db3" Dec 01 03:06:52 crc kubenswrapper[4880]: I1201 03:06:52.784428 4880 scope.go:117] "RemoveContainer" containerID="2ad7d3e3ac06f8f38927fe3579d053bcdf1b0eb2b14c23f65e4968eb708a8a38" Dec 01 03:06:53 crc kubenswrapper[4880]: I1201 03:06:53.266362 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/2.log" Dec 01 03:06:53 crc kubenswrapper[4880]: I1201 03:06:53.267317 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/1.log" Dec 01 03:06:53 crc kubenswrapper[4880]: I1201 03:06:53.267380 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5znrt" event={"ID":"6366d207-93fa-4b9f-ae70-0bab0b293db3","Type":"ContainerStarted","Data":"3c83276d120ef314c3be16ee5782fa673d9997e811de52dc9c874bc6ff34f5d3"} Dec 01 03:06:58 crc kubenswrapper[4880]: I1201 03:06:58.407391 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h6xdx" Dec 01 03:07:11 crc 
kubenswrapper[4880]: I1201 03:07:11.069438 4880 scope.go:117] "RemoveContainer" containerID="9caf57949ce823db33e7b95d40ce5a11119319e2804bb6fa2b958fd0e2487767" Dec 01 03:07:11 crc kubenswrapper[4880]: I1201 03:07:11.382625 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5znrt_6366d207-93fa-4b9f-ae70-0bab0b293db3/kube-multus/2.log" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.504657 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq"] Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.506417 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.509573 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.533560 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq"] Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.639223 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.639292 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-util\") pod 
\"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.639358 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8wb\" (UniqueName: \"kubernetes.io/projected/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-kube-api-access-vn8wb\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.741046 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.741124 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.741213 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8wb\" (UniqueName: \"kubernetes.io/projected/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-kube-api-access-vn8wb\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: 
\"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.741821 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.741984 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.777553 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8wb\" (UniqueName: \"kubernetes.io/projected/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-kube-api-access-vn8wb\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:12 crc kubenswrapper[4880]: I1201 03:07:12.861126 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:13 crc kubenswrapper[4880]: I1201 03:07:13.165641 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq"] Dec 01 03:07:13 crc kubenswrapper[4880]: I1201 03:07:13.396341 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" event={"ID":"d72d72bb-38be-4a12-9901-0dc2c58ebd6a","Type":"ContainerStarted","Data":"1cc2585b4fa576b082eecc5f8c12174d33e315b073abf03672ed0d568a127ef9"} Dec 01 03:07:13 crc kubenswrapper[4880]: I1201 03:07:13.396620 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" event={"ID":"d72d72bb-38be-4a12-9901-0dc2c58ebd6a","Type":"ContainerStarted","Data":"30120d7513af4a6286004661559d82d2a95ec78245ee1fb60accf25adb4e6711"} Dec 01 03:07:14 crc kubenswrapper[4880]: I1201 03:07:14.407530 4880 generic.go:334] "Generic (PLEG): container finished" podID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerID="1cc2585b4fa576b082eecc5f8c12174d33e315b073abf03672ed0d568a127ef9" exitCode=0 Dec 01 03:07:14 crc kubenswrapper[4880]: I1201 03:07:14.407616 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" event={"ID":"d72d72bb-38be-4a12-9901-0dc2c58ebd6a","Type":"ContainerDied","Data":"1cc2585b4fa576b082eecc5f8c12174d33e315b073abf03672ed0d568a127ef9"} Dec 01 03:07:17 crc kubenswrapper[4880]: I1201 03:07:17.431208 4880 generic.go:334] "Generic (PLEG): container finished" podID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerID="971a50fe2bfb74a4a668ae98818f1713625cab90c198637eda1d0548e56d0d02" exitCode=0 Dec 01 03:07:17 crc kubenswrapper[4880]: I1201 03:07:17.431313 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" event={"ID":"d72d72bb-38be-4a12-9901-0dc2c58ebd6a","Type":"ContainerDied","Data":"971a50fe2bfb74a4a668ae98818f1713625cab90c198637eda1d0548e56d0d02"} Dec 01 03:07:18 crc kubenswrapper[4880]: I1201 03:07:18.441353 4880 generic.go:334] "Generic (PLEG): container finished" podID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerID="e1d41a061c7144e094374fd98415417c1ada2e0ad66faf2c86378bacd8022aba" exitCode=0 Dec 01 03:07:18 crc kubenswrapper[4880]: I1201 03:07:18.441428 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" event={"ID":"d72d72bb-38be-4a12-9901-0dc2c58ebd6a","Type":"ContainerDied","Data":"e1d41a061c7144e094374fd98415417c1ada2e0ad66faf2c86378bacd8022aba"} Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.785827 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.940103 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-util\") pod \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.940183 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn8wb\" (UniqueName: \"kubernetes.io/projected/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-kube-api-access-vn8wb\") pod \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.940202 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-bundle\") pod \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\" (UID: \"d72d72bb-38be-4a12-9901-0dc2c58ebd6a\") " Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.940793 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-bundle" (OuterVolumeSpecName: "bundle") pod "d72d72bb-38be-4a12-9901-0dc2c58ebd6a" (UID: "d72d72bb-38be-4a12-9901-0dc2c58ebd6a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.948407 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-kube-api-access-vn8wb" (OuterVolumeSpecName: "kube-api-access-vn8wb") pod "d72d72bb-38be-4a12-9901-0dc2c58ebd6a" (UID: "d72d72bb-38be-4a12-9901-0dc2c58ebd6a"). InnerVolumeSpecName "kube-api-access-vn8wb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:07:19 crc kubenswrapper[4880]: I1201 03:07:19.966980 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-util" (OuterVolumeSpecName: "util") pod "d72d72bb-38be-4a12-9901-0dc2c58ebd6a" (UID: "d72d72bb-38be-4a12-9901-0dc2c58ebd6a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:07:20 crc kubenswrapper[4880]: I1201 03:07:20.042157 4880 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-util\") on node \"crc\" DevicePath \"\"" Dec 01 03:07:20 crc kubenswrapper[4880]: I1201 03:07:20.042216 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn8wb\" (UniqueName: \"kubernetes.io/projected/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-kube-api-access-vn8wb\") on node \"crc\" DevicePath \"\"" Dec 01 03:07:20 crc kubenswrapper[4880]: I1201 03:07:20.042241 4880 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d72d72bb-38be-4a12-9901-0dc2c58ebd6a-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:07:20 crc kubenswrapper[4880]: I1201 03:07:20.461848 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" event={"ID":"d72d72bb-38be-4a12-9901-0dc2c58ebd6a","Type":"ContainerDied","Data":"30120d7513af4a6286004661559d82d2a95ec78245ee1fb60accf25adb4e6711"} Dec 01 03:07:20 crc kubenswrapper[4880]: I1201 03:07:20.461972 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30120d7513af4a6286004661559d82d2a95ec78245ee1fb60accf25adb4e6711" Dec 01 03:07:20 crc kubenswrapper[4880]: I1201 03:07:20.462095 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f7l7bq" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.385055 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c"] Dec 01 03:07:21 crc kubenswrapper[4880]: E1201 03:07:21.392275 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="pull" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.392312 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="pull" Dec 01 03:07:21 crc kubenswrapper[4880]: E1201 03:07:21.392330 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="extract" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.392337 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="extract" Dec 01 03:07:21 crc kubenswrapper[4880]: E1201 03:07:21.392345 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="util" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.392353 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="util" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.392569 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72d72bb-38be-4a12-9901-0dc2c58ebd6a" containerName="extract" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.393120 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.397508 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-bz6ml" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.397808 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.397940 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.404352 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c"] Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.463661 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-859xk\" (UniqueName: \"kubernetes.io/projected/ad62334f-d786-4b12-9d4a-e3cd32505605-kube-api-access-859xk\") pod \"nmstate-operator-5b5b58f5c8-nms8c\" (UID: \"ad62334f-d786-4b12-9d4a-e3cd32505605\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.564783 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-859xk\" (UniqueName: \"kubernetes.io/projected/ad62334f-d786-4b12-9d4a-e3cd32505605-kube-api-access-859xk\") pod \"nmstate-operator-5b5b58f5c8-nms8c\" (UID: \"ad62334f-d786-4b12-9d4a-e3cd32505605\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.581570 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-859xk\" (UniqueName: \"kubernetes.io/projected/ad62334f-d786-4b12-9d4a-e3cd32505605-kube-api-access-859xk\") pod \"nmstate-operator-5b5b58f5c8-nms8c\" (UID: 
\"ad62334f-d786-4b12-9d4a-e3cd32505605\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" Dec 01 03:07:21 crc kubenswrapper[4880]: I1201 03:07:21.711051 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" Dec 01 03:07:22 crc kubenswrapper[4880]: I1201 03:07:22.204655 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c"] Dec 01 03:07:22 crc kubenswrapper[4880]: W1201 03:07:22.213048 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad62334f_d786_4b12_9d4a_e3cd32505605.slice/crio-c55ffc6783ecce137c18b5f7f51f72edb3ed6bab26a133b3f3b976c2f368261a WatchSource:0}: Error finding container c55ffc6783ecce137c18b5f7f51f72edb3ed6bab26a133b3f3b976c2f368261a: Status 404 returned error can't find the container with id c55ffc6783ecce137c18b5f7f51f72edb3ed6bab26a133b3f3b976c2f368261a Dec 01 03:07:22 crc kubenswrapper[4880]: I1201 03:07:22.471836 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" event={"ID":"ad62334f-d786-4b12-9d4a-e3cd32505605","Type":"ContainerStarted","Data":"c55ffc6783ecce137c18b5f7f51f72edb3ed6bab26a133b3f3b976c2f368261a"} Dec 01 03:07:25 crc kubenswrapper[4880]: I1201 03:07:25.492142 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" event={"ID":"ad62334f-d786-4b12-9d4a-e3cd32505605","Type":"ContainerStarted","Data":"103eb2e827d8ffced91c85f86d68e4c410cc1413bb3e6e18da63a6a916ba3e44"} Dec 01 03:07:25 crc kubenswrapper[4880]: I1201 03:07:25.513548 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-nms8c" podStartSLOduration=2.243230452 podStartE2EDuration="4.513524082s" podCreationTimestamp="2025-12-01 03:07:21 +0000 UTC" 
firstStartedPulling="2025-12-01 03:07:22.214693501 +0000 UTC m=+671.725947873" lastFinishedPulling="2025-12-01 03:07:24.484987121 +0000 UTC m=+673.996241503" observedRunningTime="2025-12-01 03:07:25.510424373 +0000 UTC m=+675.021678785" watchObservedRunningTime="2025-12-01 03:07:25.513524082 +0000 UTC m=+675.024778484" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.714519 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.716106 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.718060 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-kjt6s" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.726186 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.727218 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.731655 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.736036 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.749326 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.768856 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pflf4"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.769484 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.821421 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxtw\" (UniqueName: \"kubernetes.io/projected/e9f2ceff-591f-46fa-ac81-5ed07767e71c-kube-api-access-mvxtw\") pod \"nmstate-metrics-7f946cbc9-4mfpb\" (UID: \"e9f2ceff-591f-46fa-ac81-5ed07767e71c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.821484 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624rm\" (UniqueName: \"kubernetes.io/projected/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-kube-api-access-624rm\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.821510 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.884234 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.884817 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.886209 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-sv8k9" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.886295 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.886229 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.895731 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz"] Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922774 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-ovs-socket\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922815 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6q5\" (UniqueName: 
\"kubernetes.io/projected/2b777e58-013b-4d29-9e8d-608cd6b4696b-kube-api-access-8n6q5\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922839 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvxtw\" (UniqueName: \"kubernetes.io/projected/e9f2ceff-591f-46fa-ac81-5ed07767e71c-kube-api-access-mvxtw\") pod \"nmstate-metrics-7f946cbc9-4mfpb\" (UID: \"e9f2ceff-591f-46fa-ac81-5ed07767e71c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922890 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-624rm\" (UniqueName: \"kubernetes.io/projected/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-kube-api-access-624rm\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922912 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922930 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-dbus-socket\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.922945 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-nmstate-lock\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:32 crc kubenswrapper[4880]: E1201 03:07:32.923396 4880 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 01 03:07:32 crc kubenswrapper[4880]: E1201 03:07:32.923438 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-tls-key-pair podName:947f3093-d07c-43eb-9cdc-dad9fc4df1d1 nodeName:}" failed. No retries permitted until 2025-12-01 03:07:33.423422296 +0000 UTC m=+682.934676668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-8w4sc" (UID: "947f3093-d07c-43eb-9cdc-dad9fc4df1d1") : secret "openshift-nmstate-webhook" not found Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.942664 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvxtw\" (UniqueName: \"kubernetes.io/projected/e9f2ceff-591f-46fa-ac81-5ed07767e71c-kube-api-access-mvxtw\") pod \"nmstate-metrics-7f946cbc9-4mfpb\" (UID: \"e9f2ceff-591f-46fa-ac81-5ed07767e71c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" Dec 01 03:07:32 crc kubenswrapper[4880]: I1201 03:07:32.942671 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-624rm\" (UniqueName: \"kubernetes.io/projected/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-kube-api-access-624rm\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.024107 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d768g\" (UniqueName: \"kubernetes.io/projected/16100fc3-0ac8-43ea-a48b-2c6a132758d3-kube-api-access-d768g\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.024384 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-ovs-socket\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.024463 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6q5\" (UniqueName: \"kubernetes.io/projected/2b777e58-013b-4d29-9e8d-608cd6b4696b-kube-api-access-8n6q5\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.024562 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/16100fc3-0ac8-43ea-a48b-2c6a132758d3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.024642 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/16100fc3-0ac8-43ea-a48b-2c6a132758d3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.025055 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-dbus-socket\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.025137 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-nmstate-lock\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.025245 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-nmstate-lock\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.025345 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-ovs-socket\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.025887 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2b777e58-013b-4d29-9e8d-608cd6b4696b-dbus-socket\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.034242 4880 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.056008 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6q5\" (UniqueName: \"kubernetes.io/projected/2b777e58-013b-4d29-9e8d-608cd6b4696b-kube-api-access-8n6q5\") pod \"nmstate-handler-pflf4\" (UID: \"2b777e58-013b-4d29-9e8d-608cd6b4696b\") " pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.064755 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9cffd6b86-4pkmh"] Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.065536 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.082992 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9cffd6b86-4pkmh"] Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.084254 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.126631 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/16100fc3-0ac8-43ea-a48b-2c6a132758d3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.126678 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/16100fc3-0ac8-43ea-a48b-2c6a132758d3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.126739 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d768g\" (UniqueName: \"kubernetes.io/projected/16100fc3-0ac8-43ea-a48b-2c6a132758d3-kube-api-access-d768g\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.127820 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/16100fc3-0ac8-43ea-a48b-2c6a132758d3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.130250 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/16100fc3-0ac8-43ea-a48b-2c6a132758d3-plugin-serving-cert\") 
pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.148563 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d768g\" (UniqueName: \"kubernetes.io/projected/16100fc3-0ac8-43ea-a48b-2c6a132758d3-kube-api-access-d768g\") pod \"nmstate-console-plugin-7fbb5f6569-c8qkz\" (UID: \"16100fc3-0ac8-43ea-a48b-2c6a132758d3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.196641 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228193 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6ssg\" (UniqueName: \"kubernetes.io/projected/4aa1665c-1343-46c1-9098-ca8dab2963f7-kube-api-access-b6ssg\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228231 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-config\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228258 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-serving-cert\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " 
pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228299 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-service-ca\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228339 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-oauth-serving-cert\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228357 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-oauth-config\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.228375 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-trusted-ca-bundle\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.274479 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb"] Dec 01 03:07:33 crc kubenswrapper[4880]: W1201 03:07:33.284616 4880 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f2ceff_591f_46fa_ac81_5ed07767e71c.slice/crio-728cff5c820c86ba4b416b2fb48dfd132f4863730615cb121f2d2390f9085fb7 WatchSource:0}: Error finding container 728cff5c820c86ba4b416b2fb48dfd132f4863730615cb121f2d2390f9085fb7: Status 404 returned error can't find the container with id 728cff5c820c86ba4b416b2fb48dfd132f4863730615cb121f2d2390f9085fb7 Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.329644 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-service-ca\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.329705 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-oauth-serving-cert\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.329724 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-oauth-config\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.329740 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-trusted-ca-bundle\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc 
kubenswrapper[4880]: I1201 03:07:33.330649 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-service-ca\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.330714 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6ssg\" (UniqueName: \"kubernetes.io/projected/4aa1665c-1343-46c1-9098-ca8dab2963f7-kube-api-access-b6ssg\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.330993 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-config\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.331025 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-serving-cert\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.331576 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-config\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.331827 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-oauth-serving-cert\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.332961 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa1665c-1343-46c1-9098-ca8dab2963f7-trusted-ca-bundle\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.334407 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-serving-cert\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.338158 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4aa1665c-1343-46c1-9098-ca8dab2963f7-console-oauth-config\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.344382 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6ssg\" (UniqueName: \"kubernetes.io/projected/4aa1665c-1343-46c1-9098-ca8dab2963f7-kube-api-access-b6ssg\") pod \"console-9cffd6b86-4pkmh\" (UID: \"4aa1665c-1343-46c1-9098-ca8dab2963f7\") " pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.431888 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.432770 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.435072 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/947f3093-d07c-43eb-9cdc-dad9fc4df1d1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8w4sc\" (UID: \"947f3093-d07c-43eb-9cdc-dad9fc4df1d1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.541240 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" event={"ID":"e9f2ceff-591f-46fa-ac81-5ed07767e71c","Type":"ContainerStarted","Data":"728cff5c820c86ba4b416b2fb48dfd132f4863730615cb121f2d2390f9085fb7"} Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.542431 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pflf4" event={"ID":"2b777e58-013b-4d29-9e8d-608cd6b4696b","Type":"ContainerStarted","Data":"4fe9ff6befc0dea6524acacd07d98f32fc57102263f9ab7e245708abcfd9f614"} Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.586469 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz"] Dec 01 03:07:33 crc kubenswrapper[4880]: W1201 03:07:33.591919 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16100fc3_0ac8_43ea_a48b_2c6a132758d3.slice/crio-433e60aad758566aaea73da53b50052164fe6224c07d93b6fc8b2c4f2dffbece WatchSource:0}: 
Error finding container 433e60aad758566aaea73da53b50052164fe6224c07d93b6fc8b2c4f2dffbece: Status 404 returned error can't find the container with id 433e60aad758566aaea73da53b50052164fe6224c07d93b6fc8b2c4f2dffbece Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.613850 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9cffd6b86-4pkmh"] Dec 01 03:07:33 crc kubenswrapper[4880]: W1201 03:07:33.619160 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa1665c_1343_46c1_9098_ca8dab2963f7.slice/crio-860fdac569b1c17bc2d650b7b3105b04f6a89749b468942153bff591937f194c WatchSource:0}: Error finding container 860fdac569b1c17bc2d650b7b3105b04f6a89749b468942153bff591937f194c: Status 404 returned error can't find the container with id 860fdac569b1c17bc2d650b7b3105b04f6a89749b468942153bff591937f194c Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.644752 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:33 crc kubenswrapper[4880]: I1201 03:07:33.818540 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc"] Dec 01 03:07:33 crc kubenswrapper[4880]: W1201 03:07:33.824612 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947f3093_d07c_43eb_9cdc_dad9fc4df1d1.slice/crio-04e699a54e8adfdd7288dab8bf7059e0ed0a2780e6d6670d35f6a5c0ff7e7dc5 WatchSource:0}: Error finding container 04e699a54e8adfdd7288dab8bf7059e0ed0a2780e6d6670d35f6a5c0ff7e7dc5: Status 404 returned error can't find the container with id 04e699a54e8adfdd7288dab8bf7059e0ed0a2780e6d6670d35f6a5c0ff7e7dc5 Dec 01 03:07:34 crc kubenswrapper[4880]: I1201 03:07:34.549718 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" event={"ID":"947f3093-d07c-43eb-9cdc-dad9fc4df1d1","Type":"ContainerStarted","Data":"04e699a54e8adfdd7288dab8bf7059e0ed0a2780e6d6670d35f6a5c0ff7e7dc5"} Dec 01 03:07:34 crc kubenswrapper[4880]: I1201 03:07:34.551356 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" event={"ID":"16100fc3-0ac8-43ea-a48b-2c6a132758d3","Type":"ContainerStarted","Data":"433e60aad758566aaea73da53b50052164fe6224c07d93b6fc8b2c4f2dffbece"} Dec 01 03:07:34 crc kubenswrapper[4880]: I1201 03:07:34.553167 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9cffd6b86-4pkmh" event={"ID":"4aa1665c-1343-46c1-9098-ca8dab2963f7","Type":"ContainerStarted","Data":"790ca61222254eb8a99b78daf003e8ee5e66dc0a0bd193dc14e2f05bdbf9c86f"} Dec 01 03:07:34 crc kubenswrapper[4880]: I1201 03:07:34.553246 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9cffd6b86-4pkmh" 
event={"ID":"4aa1665c-1343-46c1-9098-ca8dab2963f7","Type":"ContainerStarted","Data":"860fdac569b1c17bc2d650b7b3105b04f6a89749b468942153bff591937f194c"} Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.574316 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" event={"ID":"947f3093-d07c-43eb-9cdc-dad9fc4df1d1","Type":"ContainerStarted","Data":"ebc44cc4de903063a22978d589637bf23ff36bca043394ee675c69feb38a4582"} Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.575216 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.576779 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" event={"ID":"16100fc3-0ac8-43ea-a48b-2c6a132758d3","Type":"ContainerStarted","Data":"3a1d51361db1ff46fe1c5ef897ffb723f3c0681903b0613b598e7d1037760fb7"} Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.579173 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" event={"ID":"e9f2ceff-591f-46fa-ac81-5ed07767e71c","Type":"ContainerStarted","Data":"30ce27783c5c1d42dca5fdbd628f6dc00ea06bb96e31d8b25ca805d9babcde63"} Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.582936 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pflf4" event={"ID":"2b777e58-013b-4d29-9e8d-608cd6b4696b","Type":"ContainerStarted","Data":"9e6fb917ac7c5a25b05c52dfaabfac27931b2eb4648aa8ab8cf9f3736ecd0952"} Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.583099 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.602244 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" podStartSLOduration=3.045435446 podStartE2EDuration="5.602223197s" podCreationTimestamp="2025-12-01 03:07:32 +0000 UTC" firstStartedPulling="2025-12-01 03:07:33.826576329 +0000 UTC m=+683.337830691" lastFinishedPulling="2025-12-01 03:07:36.38336407 +0000 UTC m=+685.894618442" observedRunningTime="2025-12-01 03:07:37.600267318 +0000 UTC m=+687.111521720" watchObservedRunningTime="2025-12-01 03:07:37.602223197 +0000 UTC m=+687.113477599" Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.604407 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9cffd6b86-4pkmh" podStartSLOduration=4.604393072 podStartE2EDuration="4.604393072s" podCreationTimestamp="2025-12-01 03:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:07:34.584131182 +0000 UTC m=+684.095385604" watchObservedRunningTime="2025-12-01 03:07:37.604393072 +0000 UTC m=+687.115647474" Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.669765 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-c8qkz" podStartSLOduration=2.882443753 podStartE2EDuration="5.669749043s" podCreationTimestamp="2025-12-01 03:07:32 +0000 UTC" firstStartedPulling="2025-12-01 03:07:33.595374203 +0000 UTC m=+683.106628575" lastFinishedPulling="2025-12-01 03:07:36.382679493 +0000 UTC m=+685.893933865" observedRunningTime="2025-12-01 03:07:37.669284022 +0000 UTC m=+687.180538404" watchObservedRunningTime="2025-12-01 03:07:37.669749043 +0000 UTC m=+687.181003415" Dec 01 03:07:37 crc kubenswrapper[4880]: I1201 03:07:37.671473 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pflf4" podStartSLOduration=2.386366356 podStartE2EDuration="5.671466967s" podCreationTimestamp="2025-12-01 03:07:32 
+0000 UTC" firstStartedPulling="2025-12-01 03:07:33.132610652 +0000 UTC m=+682.643865024" lastFinishedPulling="2025-12-01 03:07:36.417711263 +0000 UTC m=+685.928965635" observedRunningTime="2025-12-01 03:07:37.633459141 +0000 UTC m=+687.144713513" watchObservedRunningTime="2025-12-01 03:07:37.671466967 +0000 UTC m=+687.182721339" Dec 01 03:07:39 crc kubenswrapper[4880]: I1201 03:07:39.601985 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" event={"ID":"e9f2ceff-591f-46fa-ac81-5ed07767e71c","Type":"ContainerStarted","Data":"3f762b483c77f2e7163cb66c883ce05d4b752c82e026a69babe0aa1056d472bb"} Dec 01 03:07:39 crc kubenswrapper[4880]: I1201 03:07:39.633114 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-4mfpb" podStartSLOduration=1.838227806 podStartE2EDuration="7.633084552s" podCreationTimestamp="2025-12-01 03:07:32 +0000 UTC" firstStartedPulling="2025-12-01 03:07:33.287227362 +0000 UTC m=+682.798481734" lastFinishedPulling="2025-12-01 03:07:39.082084118 +0000 UTC m=+688.593338480" observedRunningTime="2025-12-01 03:07:39.632061856 +0000 UTC m=+689.143316258" watchObservedRunningTime="2025-12-01 03:07:39.633084552 +0000 UTC m=+689.144338974" Dec 01 03:07:43 crc kubenswrapper[4880]: I1201 03:07:43.122254 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pflf4" Dec 01 03:07:43 crc kubenswrapper[4880]: I1201 03:07:43.433518 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:43 crc kubenswrapper[4880]: I1201 03:07:43.434034 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:43 crc kubenswrapper[4880]: I1201 03:07:43.444395 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:43 crc kubenswrapper[4880]: I1201 03:07:43.643941 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9cffd6b86-4pkmh" Dec 01 03:07:43 crc kubenswrapper[4880]: I1201 03:07:43.725555 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qcvrn"] Dec 01 03:07:53 crc kubenswrapper[4880]: I1201 03:07:53.654441 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8w4sc" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.570460 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5"] Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.573986 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.575791 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.591157 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5"] Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.644990 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.645108 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.645155 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt6t\" (UniqueName: \"kubernetes.io/projected/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-kube-api-access-hjt6t\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.746634 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.746710 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjt6t\" (UniqueName: \"kubernetes.io/projected/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-kube-api-access-hjt6t\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.746813 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.747524 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.747990 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.798403 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjt6t\" (UniqueName: \"kubernetes.io/projected/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-kube-api-access-hjt6t\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:07 crc kubenswrapper[4880]: I1201 03:08:07.901510 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.152833 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5"] Dec 01 03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.798845 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qcvrn" podUID="747403d3-576b-4621-8cb3-b9122348ec98" containerName="console" containerID="cri-o://51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1" gracePeriod=15 Dec 01 03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.836105 4880 generic.go:334] "Generic (PLEG): container finished" podID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerID="b284071939bcf6bb7f58cae5c81cc502b6b19557dd9f86530fd92277f61af6f6" exitCode=0 Dec 01 03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.836213 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" event={"ID":"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411","Type":"ContainerDied","Data":"b284071939bcf6bb7f58cae5c81cc502b6b19557dd9f86530fd92277f61af6f6"} Dec 01 03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.836302 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" event={"ID":"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411","Type":"ContainerStarted","Data":"911fb1326972fc4b662d53969bd5f62e0f0b73e0f05c425bcf62c08ae612de62"} Dec 01 03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.844555 4880 patch_prober.go:28] interesting pod/console-f9d7485db-qcvrn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 01 
03:08:08 crc kubenswrapper[4880]: I1201 03:08:08.844631 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-qcvrn" podUID="747403d3-576b-4621-8cb3-b9122348ec98" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.216567 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qcvrn_747403d3-576b-4621-8cb3-b9122348ec98/console/0.log" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.216937 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270011 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-oauth-serving-cert\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270072 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-service-ca\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270182 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-serving-cert\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270218 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-oauth-config\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270241 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-trusted-ca-bundle\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270325 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-console-config\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.270353 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqtks\" (UniqueName: \"kubernetes.io/projected/747403d3-576b-4621-8cb3-b9122348ec98-kube-api-access-pqtks\") pod \"747403d3-576b-4621-8cb3-b9122348ec98\" (UID: \"747403d3-576b-4621-8cb3-b9122348ec98\") " Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.272562 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.272675 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.272838 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-service-ca" (OuterVolumeSpecName: "service-ca") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.273427 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-console-config" (OuterVolumeSpecName: "console-config") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.280494 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747403d3-576b-4621-8cb3-b9122348ec98-kube-api-access-pqtks" (OuterVolumeSpecName: "kube-api-access-pqtks") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "kube-api-access-pqtks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.282276 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.285411 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "747403d3-576b-4621-8cb3-b9122348ec98" (UID: "747403d3-576b-4621-8cb3-b9122348ec98"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372318 4880 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372367 4880 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372388 4880 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/747403d3-576b-4621-8cb3-b9122348ec98-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372406 4880 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372425 4880 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372443 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqtks\" (UniqueName: \"kubernetes.io/projected/747403d3-576b-4621-8cb3-b9122348ec98-kube-api-access-pqtks\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.372463 4880 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/747403d3-576b-4621-8cb3-b9122348ec98-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.843091 4880 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qcvrn_747403d3-576b-4621-8cb3-b9122348ec98/console/0.log" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.843138 4880 generic.go:334] "Generic (PLEG): container finished" podID="747403d3-576b-4621-8cb3-b9122348ec98" containerID="51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1" exitCode=2 Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.843164 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qcvrn" event={"ID":"747403d3-576b-4621-8cb3-b9122348ec98","Type":"ContainerDied","Data":"51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1"} Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.843193 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qcvrn" 
event={"ID":"747403d3-576b-4621-8cb3-b9122348ec98","Type":"ContainerDied","Data":"06b1a4c70d46ee8fda7d107caf63dee891daab163f797a7ab3df15ef027a69cf"} Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.843215 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qcvrn" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.843220 4880 scope.go:117] "RemoveContainer" containerID="51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.859059 4880 scope.go:117] "RemoveContainer" containerID="51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1" Dec 01 03:08:09 crc kubenswrapper[4880]: E1201 03:08:09.860323 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1\": container with ID starting with 51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1 not found: ID does not exist" containerID="51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.860542 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1"} err="failed to get container status \"51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1\": rpc error: code = NotFound desc = could not find container \"51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1\": container with ID starting with 51e9a2d4054dec20f57d26a744c92ae99ac49a389df66936b87fb411a3c1e1a1 not found: ID does not exist" Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.892026 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qcvrn"] Dec 01 03:08:09 crc kubenswrapper[4880]: I1201 03:08:09.896794 4880 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qcvrn"] Dec 01 03:08:10 crc kubenswrapper[4880]: I1201 03:08:10.800967 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747403d3-576b-4621-8cb3-b9122348ec98" path="/var/lib/kubelet/pods/747403d3-576b-4621-8cb3-b9122348ec98/volumes" Dec 01 03:08:11 crc kubenswrapper[4880]: I1201 03:08:11.861042 4880 generic.go:334] "Generic (PLEG): container finished" podID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerID="8a370fb853222273969636b0f200dacfbcf34eabc049ac4154b95d5d6826be82" exitCode=0 Dec 01 03:08:11 crc kubenswrapper[4880]: I1201 03:08:11.861091 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" event={"ID":"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411","Type":"ContainerDied","Data":"8a370fb853222273969636b0f200dacfbcf34eabc049ac4154b95d5d6826be82"} Dec 01 03:08:12 crc kubenswrapper[4880]: I1201 03:08:12.872227 4880 generic.go:334] "Generic (PLEG): container finished" podID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerID="d613da95e83f74ed73df55dc3373087520026e836b57751f09f15cac6842c065" exitCode=0 Dec 01 03:08:12 crc kubenswrapper[4880]: I1201 03:08:12.872306 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" event={"ID":"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411","Type":"ContainerDied","Data":"d613da95e83f74ed73df55dc3373087520026e836b57751f09f15cac6842c065"} Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.137977 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.337180 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjt6t\" (UniqueName: \"kubernetes.io/projected/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-kube-api-access-hjt6t\") pod \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.337289 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-bundle\") pod \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.337365 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-util\") pod \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\" (UID: \"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411\") " Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.338591 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-bundle" (OuterVolumeSpecName: "bundle") pod "16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" (UID: "16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.348077 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-kube-api-access-hjt6t" (OuterVolumeSpecName: "kube-api-access-hjt6t") pod "16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" (UID: "16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411"). InnerVolumeSpecName "kube-api-access-hjt6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.350802 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-util" (OuterVolumeSpecName: "util") pod "16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" (UID: "16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.438518 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjt6t\" (UniqueName: \"kubernetes.io/projected/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-kube-api-access-hjt6t\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.438557 4880 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.438568 4880 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411-util\") on node \"crc\" DevicePath \"\"" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.893064 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" event={"ID":"16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411","Type":"ContainerDied","Data":"911fb1326972fc4b662d53969bd5f62e0f0b73e0f05c425bcf62c08ae612de62"} Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.893136 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="911fb1326972fc4b662d53969bd5f62e0f0b73e0f05c425bcf62c08ae612de62" Dec 01 03:08:14 crc kubenswrapper[4880]: I1201 03:08:14.893153 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83nnzr5" Dec 01 03:08:17 crc kubenswrapper[4880]: I1201 03:08:17.369176 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:08:17 crc kubenswrapper[4880]: I1201 03:08:17.369575 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.427230 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l"] Dec 01 03:08:23 crc kubenswrapper[4880]: E1201 03:08:23.427910 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="pull" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.427922 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="pull" Dec 01 03:08:23 crc kubenswrapper[4880]: E1201 03:08:23.427935 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="util" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.427941 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="util" Dec 01 03:08:23 crc kubenswrapper[4880]: E1201 03:08:23.427949 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747403d3-576b-4621-8cb3-b9122348ec98" 
containerName="console" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.427954 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="747403d3-576b-4621-8cb3-b9122348ec98" containerName="console" Dec 01 03:08:23 crc kubenswrapper[4880]: E1201 03:08:23.427964 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="extract" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.427970 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="extract" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.428055 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cfa0cb-d306-4e2f-8ff2-35ffdcf7c411" containerName="extract" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.428069 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="747403d3-576b-4621-8cb3-b9122348ec98" containerName="console" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.428397 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.431407 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.431587 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.431796 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-h59rx" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.431949 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.432101 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.452504 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l"] Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.466062 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6a03958-2974-47c2-aa78-dbcaafa5c917-webhook-cert\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.466117 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6a03958-2974-47c2-aa78-dbcaafa5c917-apiservice-cert\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: 
\"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.466345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dx8d\" (UniqueName: \"kubernetes.io/projected/d6a03958-2974-47c2-aa78-dbcaafa5c917-kube-api-access-4dx8d\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.567378 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dx8d\" (UniqueName: \"kubernetes.io/projected/d6a03958-2974-47c2-aa78-dbcaafa5c917-kube-api-access-4dx8d\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.567436 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6a03958-2974-47c2-aa78-dbcaafa5c917-webhook-cert\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.567468 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6a03958-2974-47c2-aa78-dbcaafa5c917-apiservice-cert\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.574612 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6a03958-2974-47c2-aa78-dbcaafa5c917-webhook-cert\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.575313 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d6a03958-2974-47c2-aa78-dbcaafa5c917-apiservice-cert\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.595311 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dx8d\" (UniqueName: \"kubernetes.io/projected/d6a03958-2974-47c2-aa78-dbcaafa5c917-kube-api-access-4dx8d\") pod \"metallb-operator-controller-manager-786f4989fd-hlf9l\" (UID: \"d6a03958-2974-47c2-aa78-dbcaafa5c917\") " pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.741694 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.769511 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k"] Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.770378 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.776203 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fh64x" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.776369 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.776503 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.785813 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k"] Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.872500 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da6e7740-b805-4381-bf0f-cec58f73c509-webhook-cert\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.872887 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps924\" (UniqueName: \"kubernetes.io/projected/da6e7740-b805-4381-bf0f-cec58f73c509-kube-api-access-ps924\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.872913 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/da6e7740-b805-4381-bf0f-cec58f73c509-apiservice-cert\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.973765 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da6e7740-b805-4381-bf0f-cec58f73c509-webhook-cert\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.973840 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps924\" (UniqueName: \"kubernetes.io/projected/da6e7740-b805-4381-bf0f-cec58f73c509-kube-api-access-ps924\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.973865 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da6e7740-b805-4381-bf0f-cec58f73c509-apiservice-cert\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.981554 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da6e7740-b805-4381-bf0f-cec58f73c509-webhook-cert\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc 
kubenswrapper[4880]: I1201 03:08:23.981625 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da6e7740-b805-4381-bf0f-cec58f73c509-apiservice-cert\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:23 crc kubenswrapper[4880]: I1201 03:08:23.991449 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps924\" (UniqueName: \"kubernetes.io/projected/da6e7740-b805-4381-bf0f-cec58f73c509-kube-api-access-ps924\") pod \"metallb-operator-webhook-server-d795c9c7f-ql54k\" (UID: \"da6e7740-b805-4381-bf0f-cec58f73c509\") " pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:24 crc kubenswrapper[4880]: I1201 03:08:24.031765 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l"] Dec 01 03:08:24 crc kubenswrapper[4880]: W1201 03:08:24.034267 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6a03958_2974_47c2_aa78_dbcaafa5c917.slice/crio-3081f187da0fd6feb8b1510c46be8f9ad80fd876364218f83024297be0f26470 WatchSource:0}: Error finding container 3081f187da0fd6feb8b1510c46be8f9ad80fd876364218f83024297be0f26470: Status 404 returned error can't find the container with id 3081f187da0fd6feb8b1510c46be8f9ad80fd876364218f83024297be0f26470 Dec 01 03:08:24 crc kubenswrapper[4880]: I1201 03:08:24.131151 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:24 crc kubenswrapper[4880]: I1201 03:08:24.521350 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k"] Dec 01 03:08:24 crc kubenswrapper[4880]: W1201 03:08:24.521928 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda6e7740_b805_4381_bf0f_cec58f73c509.slice/crio-6ecf78009e1860fb9b4be5765543e9b7eef8ca164857418f06de837b96f45bb8 WatchSource:0}: Error finding container 6ecf78009e1860fb9b4be5765543e9b7eef8ca164857418f06de837b96f45bb8: Status 404 returned error can't find the container with id 6ecf78009e1860fb9b4be5765543e9b7eef8ca164857418f06de837b96f45bb8 Dec 01 03:08:24 crc kubenswrapper[4880]: I1201 03:08:24.949843 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" event={"ID":"d6a03958-2974-47c2-aa78-dbcaafa5c917","Type":"ContainerStarted","Data":"3081f187da0fd6feb8b1510c46be8f9ad80fd876364218f83024297be0f26470"} Dec 01 03:08:24 crc kubenswrapper[4880]: I1201 03:08:24.951186 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" event={"ID":"da6e7740-b805-4381-bf0f-cec58f73c509","Type":"ContainerStarted","Data":"6ecf78009e1860fb9b4be5765543e9b7eef8ca164857418f06de837b96f45bb8"} Dec 01 03:08:29 crc kubenswrapper[4880]: I1201 03:08:29.992994 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" event={"ID":"da6e7740-b805-4381-bf0f-cec58f73c509","Type":"ContainerStarted","Data":"28fc822a088587241ac44c303be6f81d0d9db7391aed3ca65e9a5bce4adb92c4"} Dec 01 03:08:29 crc kubenswrapper[4880]: I1201 03:08:29.993559 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:29 crc kubenswrapper[4880]: I1201 03:08:29.996039 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" event={"ID":"d6a03958-2974-47c2-aa78-dbcaafa5c917","Type":"ContainerStarted","Data":"84c9be7e1fb7fa272ec98cdf5101f703245ff7cb650dfa1f174c79e1e1feb4ae"} Dec 01 03:08:29 crc kubenswrapper[4880]: I1201 03:08:29.996241 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:08:30 crc kubenswrapper[4880]: I1201 03:08:30.026017 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" podStartSLOduration=1.9445602119999998 podStartE2EDuration="7.025996882s" podCreationTimestamp="2025-12-01 03:08:23 +0000 UTC" firstStartedPulling="2025-12-01 03:08:24.523910082 +0000 UTC m=+734.035164454" lastFinishedPulling="2025-12-01 03:08:29.605346742 +0000 UTC m=+739.116601124" observedRunningTime="2025-12-01 03:08:30.020576034 +0000 UTC m=+739.531830416" watchObservedRunningTime="2025-12-01 03:08:30.025996882 +0000 UTC m=+739.537251274" Dec 01 03:08:30 crc kubenswrapper[4880]: I1201 03:08:30.051805 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" podStartSLOduration=3.239249744 podStartE2EDuration="7.051783789s" podCreationTimestamp="2025-12-01 03:08:23 +0000 UTC" firstStartedPulling="2025-12-01 03:08:24.036962113 +0000 UTC m=+733.548216485" lastFinishedPulling="2025-12-01 03:08:27.849496148 +0000 UTC m=+737.360750530" observedRunningTime="2025-12-01 03:08:30.046865024 +0000 UTC m=+739.558119406" watchObservedRunningTime="2025-12-01 03:08:30.051783789 +0000 UTC m=+739.563038171" Dec 01 03:08:44 crc kubenswrapper[4880]: I1201 03:08:44.140183 4880 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d795c9c7f-ql54k" Dec 01 03:08:47 crc kubenswrapper[4880]: I1201 03:08:47.369526 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:08:47 crc kubenswrapper[4880]: I1201 03:08:47.370758 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:08:54 crc kubenswrapper[4880]: I1201 03:08:54.240072 4880 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 03:09:03 crc kubenswrapper[4880]: I1201 03:09:03.745646 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-786f4989fd-hlf9l" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.434883 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"] Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.435956 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.437513 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.449575 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gbhqg" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.451689 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cq6rj"] Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.454427 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.457895 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.459992 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.467849 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"] Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.516824 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g8jrs"] Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.517604 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.520518 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.520646 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.520844 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-24q7c" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522300 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a84b8fb-7af8-4aa3-9e90-206299d83539-metrics-certs\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522339 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-sockets\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522360 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metrics-certs\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522379 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2ft\" (UniqueName: \"kubernetes.io/projected/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-kube-api-access-bj2ft\") pod 
\"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522397 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metallb-excludel2\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522422 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qztc\" (UniqueName: \"kubernetes.io/projected/5703a8ce-3290-4e92-b52b-bcb212eca6eb-kube-api-access-4qztc\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522440 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mqk7\" (UniqueName: \"kubernetes.io/projected/1a84b8fb-7af8-4aa3-9e90-206299d83539-kube-api-access-2mqk7\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522462 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-conf\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522484 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-reloader\") pod \"frr-k8s-cq6rj\" (UID: 
\"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522508 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-startup\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522526 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522612 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-metrics\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522644 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.522843 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.529480 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-jlpg7"] Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 
03:09:04.530235 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.531771 4880 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.551760 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jlpg7"] Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623534 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-metrics\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623611 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623632 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a84b8fb-7af8-4aa3-9e90-206299d83539-metrics-certs\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623652 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-sockets\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623673 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metrics-certs\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623693 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj2ft\" (UniqueName: \"kubernetes.io/projected/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-kube-api-access-bj2ft\") pod \"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623711 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metallb-excludel2\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623737 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qztc\" (UniqueName: \"kubernetes.io/projected/5703a8ce-3290-4e92-b52b-bcb212eca6eb-kube-api-access-4qztc\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623758 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqs4j\" (UniqueName: \"kubernetes.io/projected/83cfc261-63b5-49cb-b398-47da2fde05fc-kube-api-access-xqs4j\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623778 4880 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2mqk7\" (UniqueName: \"kubernetes.io/projected/1a84b8fb-7af8-4aa3-9e90-206299d83539-kube-api-access-2mqk7\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623799 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-conf\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623815 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-reloader\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623837 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-startup\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623852 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623887 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-metrics-certs\") pod 
\"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.623904 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-cert\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.624014 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-metrics\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.624234 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-sockets\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.624365 4880 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.624400 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-conf\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.624418 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metrics-certs podName:5703a8ce-3290-4e92-b52b-bcb212eca6eb nodeName:}" failed. 
No retries permitted until 2025-12-01 03:09:05.124403131 +0000 UTC m=+774.635657503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metrics-certs") pod "speaker-g8jrs" (UID: "5703a8ce-3290-4e92-b52b-bcb212eca6eb") : secret "speaker-certs-secret" not found Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.624537 4880 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.624562 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1a84b8fb-7af8-4aa3-9e90-206299d83539-reloader\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.624611 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-cert podName:8bcd2bf0-0f3e-4294-9e67-1d86675063a1 nodeName:}" failed. No retries permitted until 2025-12-01 03:09:05.124593786 +0000 UTC m=+774.635848158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-cert") pod "frr-k8s-webhook-server-7fcb986d4-l8tdk" (UID: "8bcd2bf0-0f3e-4294-9e67-1d86675063a1") : secret "frr-k8s-webhook-server-cert" not found Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.625006 4880 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.625049 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist podName:5703a8ce-3290-4e92-b52b-bcb212eca6eb nodeName:}" failed. 
No retries permitted until 2025-12-01 03:09:05.125035297 +0000 UTC m=+774.636289669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist") pod "speaker-g8jrs" (UID: "5703a8ce-3290-4e92-b52b-bcb212eca6eb") : secret "metallb-memberlist" not found Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.625124 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metallb-excludel2\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.625373 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1a84b8fb-7af8-4aa3-9e90-206299d83539-frr-startup\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.639676 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a84b8fb-7af8-4aa3-9e90-206299d83539-metrics-certs\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.658333 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qztc\" (UniqueName: \"kubernetes.io/projected/5703a8ce-3290-4e92-b52b-bcb212eca6eb-kube-api-access-4qztc\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.666796 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mqk7\" (UniqueName: 
\"kubernetes.io/projected/1a84b8fb-7af8-4aa3-9e90-206299d83539-kube-api-access-2mqk7\") pod \"frr-k8s-cq6rj\" (UID: \"1a84b8fb-7af8-4aa3-9e90-206299d83539\") " pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.683719 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj2ft\" (UniqueName: \"kubernetes.io/projected/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-kube-api-access-bj2ft\") pod \"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.724500 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqs4j\" (UniqueName: \"kubernetes.io/projected/83cfc261-63b5-49cb-b398-47da2fde05fc-kube-api-access-xqs4j\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.724578 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-metrics-certs\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.724597 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-cert\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: E1201 03:09:04.724934 4880 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 01 03:09:04 crc kubenswrapper[4880]: 
E1201 03:09:04.724989 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-metrics-certs podName:83cfc261-63b5-49cb-b398-47da2fde05fc nodeName:}" failed. No retries permitted until 2025-12-01 03:09:05.224974964 +0000 UTC m=+774.736229336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-metrics-certs") pod "controller-f8648f98b-jlpg7" (UID: "83cfc261-63b5-49cb-b398-47da2fde05fc") : secret "controller-certs-secret" not found Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.730084 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-cert\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.751525 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqs4j\" (UniqueName: \"kubernetes.io/projected/83cfc261-63b5-49cb-b398-47da2fde05fc-kube-api-access-xqs4j\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7" Dec 01 03:09:04 crc kubenswrapper[4880]: I1201 03:09:04.766389 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cq6rj" Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.130570 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metrics-certs\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.131060 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.131181 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs" Dec 01 03:09:05 crc kubenswrapper[4880]: E1201 03:09:05.131360 4880 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 03:09:05 crc kubenswrapper[4880]: E1201 03:09:05.131451 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist podName:5703a8ce-3290-4e92-b52b-bcb212eca6eb nodeName:}" failed. No retries permitted until 2025-12-01 03:09:06.131429362 +0000 UTC m=+775.642683774 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist") pod "speaker-g8jrs" (UID: "5703a8ce-3290-4e92-b52b-bcb212eca6eb") : secret "metallb-memberlist" not found
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.134989 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bcd2bf0-0f3e-4294-9e67-1d86675063a1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-l8tdk\" (UID: \"8bcd2bf0-0f3e-4294-9e67-1d86675063a1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.135196 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-metrics-certs\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs"
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.222496 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"9486e34b0e530b7af2b1b92b95715549845d8e52778314a285d6d7db68b4f2a0"}
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.232062 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-metrics-certs\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7"
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.235560 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83cfc261-63b5-49cb-b398-47da2fde05fc-metrics-certs\") pod \"controller-f8648f98b-jlpg7\" (UID: \"83cfc261-63b5-49cb-b398-47da2fde05fc\") " pod="metallb-system/controller-f8648f98b-jlpg7"
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.352375 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.448731 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jlpg7"
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.707852 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jlpg7"]
Dec 01 03:09:05 crc kubenswrapper[4880]: I1201 03:09:05.835271 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"]
Dec 01 03:09:05 crc kubenswrapper[4880]: W1201 03:09:05.838415 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bcd2bf0_0f3e_4294_9e67_1d86675063a1.slice/crio-24d1e12c6fe9b859f336353c37c720afec154edd9d1d64ae2ddd16f0b449c6b4 WatchSource:0}: Error finding container 24d1e12c6fe9b859f336353c37c720afec154edd9d1d64ae2ddd16f0b449c6b4: Status 404 returned error can't find the container with id 24d1e12c6fe9b859f336353c37c720afec154edd9d1d64ae2ddd16f0b449c6b4
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.142133 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs"
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.148372 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5703a8ce-3290-4e92-b52b-bcb212eca6eb-memberlist\") pod \"speaker-g8jrs\" (UID: \"5703a8ce-3290-4e92-b52b-bcb212eca6eb\") " pod="metallb-system/speaker-g8jrs"
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.232526 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jlpg7" event={"ID":"83cfc261-63b5-49cb-b398-47da2fde05fc","Type":"ContainerStarted","Data":"2d072047949dfc585b81e0140e3c8ff731bd3f53ee66709cfccf5a6808de2c1a"}
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.232612 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jlpg7" event={"ID":"83cfc261-63b5-49cb-b398-47da2fde05fc","Type":"ContainerStarted","Data":"218c83b2b5446a0079353a427f7cc38449eb55cfb4c78b8e2a7b707997ca28bd"}
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.232639 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jlpg7" event={"ID":"83cfc261-63b5-49cb-b398-47da2fde05fc","Type":"ContainerStarted","Data":"fdd78d380c047451bcba41122d52d847f6f2e027b86e112a0bcdd445d545fff9"}
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.232667 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-jlpg7"
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.233839 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" event={"ID":"8bcd2bf0-0f3e-4294-9e67-1d86675063a1","Type":"ContainerStarted","Data":"24d1e12c6fe9b859f336353c37c720afec154edd9d1d64ae2ddd16f0b449c6b4"}
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.253168 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-jlpg7" podStartSLOduration=2.253149288 podStartE2EDuration="2.253149288s" podCreationTimestamp="2025-12-01 03:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:09:06.248795867 +0000 UTC m=+775.760050239" watchObservedRunningTime="2025-12-01 03:09:06.253149288 +0000 UTC m=+775.764403660"
Dec 01 03:09:06 crc kubenswrapper[4880]: I1201 03:09:06.333315 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g8jrs"
Dec 01 03:09:06 crc kubenswrapper[4880]: W1201 03:09:06.349879 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5703a8ce_3290_4e92_b52b_bcb212eca6eb.slice/crio-848903072b01223ffc10d5f3d117b5d0899e23ce6640d0d308afdcd794c5fd64 WatchSource:0}: Error finding container 848903072b01223ffc10d5f3d117b5d0899e23ce6640d0d308afdcd794c5fd64: Status 404 returned error can't find the container with id 848903072b01223ffc10d5f3d117b5d0899e23ce6640d0d308afdcd794c5fd64
Dec 01 03:09:07 crc kubenswrapper[4880]: I1201 03:09:07.249791 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8jrs" event={"ID":"5703a8ce-3290-4e92-b52b-bcb212eca6eb","Type":"ContainerStarted","Data":"d0778689044e423a164b42e5ea019024ee8641f589b53704f476b9ea72a6edbd"}
Dec 01 03:09:07 crc kubenswrapper[4880]: I1201 03:09:07.250114 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8jrs" event={"ID":"5703a8ce-3290-4e92-b52b-bcb212eca6eb","Type":"ContainerStarted","Data":"c195b700c044e25e3cce4421fdabbc0ce311dc104b72d77367a1eb0de56a2a2e"}
Dec 01 03:09:07 crc kubenswrapper[4880]: I1201 03:09:07.250128 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8jrs" event={"ID":"5703a8ce-3290-4e92-b52b-bcb212eca6eb","Type":"ContainerStarted","Data":"848903072b01223ffc10d5f3d117b5d0899e23ce6640d0d308afdcd794c5fd64"}
Dec 01 03:09:07 crc kubenswrapper[4880]: I1201 03:09:07.250302 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g8jrs"
Dec 01 03:09:07 crc kubenswrapper[4880]: I1201 03:09:07.280447 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g8jrs" podStartSLOduration=3.280432885 podStartE2EDuration="3.280432885s" podCreationTimestamp="2025-12-01 03:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:09:07.276270959 +0000 UTC m=+776.787525331" watchObservedRunningTime="2025-12-01 03:09:07.280432885 +0000 UTC m=+776.791687257"
Dec 01 03:09:13 crc kubenswrapper[4880]: I1201 03:09:13.288260 4880 generic.go:334] "Generic (PLEG): container finished" podID="1a84b8fb-7af8-4aa3-9e90-206299d83539" containerID="04248b51f1388a3f6314df740806951b431e5839acf0dd2888bf22bfa746c0b1" exitCode=0
Dec 01 03:09:13 crc kubenswrapper[4880]: I1201 03:09:13.288330 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerDied","Data":"04248b51f1388a3f6314df740806951b431e5839acf0dd2888bf22bfa746c0b1"}
Dec 01 03:09:13 crc kubenswrapper[4880]: I1201 03:09:13.291315 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" event={"ID":"8bcd2bf0-0f3e-4294-9e67-1d86675063a1","Type":"ContainerStarted","Data":"bb3906e87638c25c7e85edbdfe8f58cf11a1ced5e00414a6f0bc871598027f11"}
Dec 01 03:09:13 crc kubenswrapper[4880]: I1201 03:09:13.291630 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"
Dec 01 03:09:13 crc kubenswrapper[4880]: I1201 03:09:13.358494 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk" podStartSLOduration=2.577972936 podStartE2EDuration="9.358476526s" podCreationTimestamp="2025-12-01 03:09:04 +0000 UTC" firstStartedPulling="2025-12-01 03:09:05.84118611 +0000 UTC m=+775.352440492" lastFinishedPulling="2025-12-01 03:09:12.62168971 +0000 UTC m=+782.132944082" observedRunningTime="2025-12-01 03:09:13.351342824 +0000 UTC m=+782.862597226" watchObservedRunningTime="2025-12-01 03:09:13.358476526 +0000 UTC m=+782.869730908"
Dec 01 03:09:14 crc kubenswrapper[4880]: I1201 03:09:14.299999 4880 generic.go:334] "Generic (PLEG): container finished" podID="1a84b8fb-7af8-4aa3-9e90-206299d83539" containerID="bd8be73e399fc936806793ca4c1811d356e8e4e23c3bb53d1b745fc4b7d18c9f" exitCode=0
Dec 01 03:09:14 crc kubenswrapper[4880]: I1201 03:09:14.300118 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerDied","Data":"bd8be73e399fc936806793ca4c1811d356e8e4e23c3bb53d1b745fc4b7d18c9f"}
Dec 01 03:09:15 crc kubenswrapper[4880]: I1201 03:09:15.310926 4880 generic.go:334] "Generic (PLEG): container finished" podID="1a84b8fb-7af8-4aa3-9e90-206299d83539" containerID="112fd0cd0ca67affdae2a1416960ce1061d825eb086c7c48b473f0ca0f34b997" exitCode=0
Dec 01 03:09:15 crc kubenswrapper[4880]: I1201 03:09:15.310999 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerDied","Data":"112fd0cd0ca67affdae2a1416960ce1061d825eb086c7c48b473f0ca0f34b997"}
Dec 01 03:09:15 crc kubenswrapper[4880]: I1201 03:09:15.454288 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-jlpg7"
Dec 01 03:09:16 crc kubenswrapper[4880]: I1201 03:09:16.331795 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"21e8f1f49809c190e616229192f5bed8904b83b14d225a94d53bf76616817c63"}
Dec 01 03:09:16 crc kubenswrapper[4880]: I1201 03:09:16.331852 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"009e4fcbc72da39571a7168262215a3bb47e0eb6f5f2d030ec79f454937b8104"}
Dec 01 03:09:16 crc kubenswrapper[4880]: I1201 03:09:16.331921 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"095d5ee6381b4c3d55734b7761fd44c14d92395cdf0f99c9b6927effb8e3a90f"}
Dec 01 03:09:16 crc kubenswrapper[4880]: I1201 03:09:16.331941 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"323e3a420a36ea3adcb43411b783b04e28e6db7531973b023a0f41e8caec93db"}
Dec 01 03:09:16 crc kubenswrapper[4880]: I1201 03:09:16.331955 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"f16b958a844c22336547ae3dfe97e97a52c92bfa806e3e21008d74d5d2b96abe"}
Dec 01 03:09:16 crc kubenswrapper[4880]: I1201 03:09:16.338301 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g8jrs"
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.346667 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cq6rj" event={"ID":"1a84b8fb-7af8-4aa3-9e90-206299d83539","Type":"ContainerStarted","Data":"b7b2a56de722520710960bad357e0181f25e405e32ae39583057ddcf8814c0dd"}
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.347038 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cq6rj"
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.371725 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.371776 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.371815 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh"
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.372521 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e7e08cb7118ecb74645ada937d5a46b564fd983b8d99301ceea950ba427688d"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.372622 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://9e7e08cb7118ecb74645ada937d5a46b564fd983b8d99301ceea950ba427688d" gracePeriod=600
Dec 01 03:09:17 crc kubenswrapper[4880]: I1201 03:09:17.393929 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cq6rj" podStartSLOduration=5.68334348 podStartE2EDuration="13.393900002s" podCreationTimestamp="2025-12-01 03:09:04 +0000 UTC" firstStartedPulling="2025-12-01 03:09:04.886065149 +0000 UTC m=+774.397319521" lastFinishedPulling="2025-12-01 03:09:12.596621671 +0000 UTC m=+782.107876043" observedRunningTime="2025-12-01 03:09:17.382637155 +0000 UTC m=+786.893891577" watchObservedRunningTime="2025-12-01 03:09:17.393900002 +0000 UTC m=+786.905154414"
Dec 01 03:09:18 crc kubenswrapper[4880]: I1201 03:09:18.356829 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="9e7e08cb7118ecb74645ada937d5a46b564fd983b8d99301ceea950ba427688d" exitCode=0
Dec 01 03:09:18 crc kubenswrapper[4880]: I1201 03:09:18.357002 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"9e7e08cb7118ecb74645ada937d5a46b564fd983b8d99301ceea950ba427688d"}
Dec 01 03:09:18 crc kubenswrapper[4880]: I1201 03:09:18.357346 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"2d75a52daa0e2a7f2599a1e892312d328b61520f98232bcc7cdb455390b50937"}
Dec 01 03:09:18 crc kubenswrapper[4880]: I1201 03:09:18.357384 4880 scope.go:117] "RemoveContainer" containerID="8263d28df2319abe6fc219ddd5fbeb3c45e1155279155d321212f1fcda96f18d"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.191438 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cvbjp"]
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.192493 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.196573 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.196886 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hskrp"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.197024 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.218610 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvbjp"]
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.316973 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjf2r\" (UniqueName: \"kubernetes.io/projected/27acc303-e579-4616-815e-11de04160c26-kube-api-access-kjf2r\") pod \"openstack-operator-index-cvbjp\" (UID: \"27acc303-e579-4616-815e-11de04160c26\") " pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.418230 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjf2r\" (UniqueName: \"kubernetes.io/projected/27acc303-e579-4616-815e-11de04160c26-kube-api-access-kjf2r\") pod \"openstack-operator-index-cvbjp\" (UID: \"27acc303-e579-4616-815e-11de04160c26\") " pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.439261 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjf2r\" (UniqueName: \"kubernetes.io/projected/27acc303-e579-4616-815e-11de04160c26-kube-api-access-kjf2r\") pod \"openstack-operator-index-cvbjp\" (UID: \"27acc303-e579-4616-815e-11de04160c26\") " pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.508296 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.767145 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cq6rj"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.814863 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cq6rj"
Dec 01 03:09:19 crc kubenswrapper[4880]: I1201 03:09:19.917685 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvbjp"]
Dec 01 03:09:20 crc kubenswrapper[4880]: I1201 03:09:20.378155 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvbjp" event={"ID":"27acc303-e579-4616-815e-11de04160c26","Type":"ContainerStarted","Data":"f5fd8cb42f7e26b2dca998f2ddcfafbbba24e9ff4e7ca9656dcf9a63087a852f"}
Dec 01 03:09:22 crc kubenswrapper[4880]: I1201 03:09:22.359500 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvbjp"]
Dec 01 03:09:22 crc kubenswrapper[4880]: I1201 03:09:22.983697 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-lf2jt"]
Dec 01 03:09:22 crc kubenswrapper[4880]: I1201 03:09:22.985815 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:22 crc kubenswrapper[4880]: I1201 03:09:22.995964 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lf2jt"]
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.069664 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v76r\" (UniqueName: \"kubernetes.io/projected/b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3-kube-api-access-7v76r\") pod \"openstack-operator-index-lf2jt\" (UID: \"b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3\") " pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.171528 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v76r\" (UniqueName: \"kubernetes.io/projected/b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3-kube-api-access-7v76r\") pod \"openstack-operator-index-lf2jt\" (UID: \"b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3\") " pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.205859 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v76r\" (UniqueName: \"kubernetes.io/projected/b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3-kube-api-access-7v76r\") pod \"openstack-operator-index-lf2jt\" (UID: \"b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3\") " pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.323338 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.400410 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvbjp" event={"ID":"27acc303-e579-4616-815e-11de04160c26","Type":"ContainerStarted","Data":"12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375"}
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.400974 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cvbjp" podUID="27acc303-e579-4616-815e-11de04160c26" containerName="registry-server" containerID="cri-o://12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375" gracePeriod=2
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.421013 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cvbjp" podStartSLOduration=1.96710443 podStartE2EDuration="4.420994784s" podCreationTimestamp="2025-12-01 03:09:19 +0000 UTC" firstStartedPulling="2025-12-01 03:09:19.931343965 +0000 UTC m=+789.442598337" lastFinishedPulling="2025-12-01 03:09:22.385234319 +0000 UTC m=+791.896488691" observedRunningTime="2025-12-01 03:09:23.418829089 +0000 UTC m=+792.930083471" watchObservedRunningTime="2025-12-01 03:09:23.420994784 +0000 UTC m=+792.932249156"
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.591308 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-lf2jt"]
Dec 01 03:09:23 crc kubenswrapper[4880]: W1201 03:09:23.605070 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b1e358_cf8b_4c6c_92d7_2e34fb759cc3.slice/crio-24335836ba35dc36a7383518d6fb3343e4cdb8cdd8c1299a4211d41b74fc30df WatchSource:0}: Error finding container 24335836ba35dc36a7383518d6fb3343e4cdb8cdd8c1299a4211d41b74fc30df: Status 404 returned error can't find the container with id 24335836ba35dc36a7383518d6fb3343e4cdb8cdd8c1299a4211d41b74fc30df
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.791120 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.882089 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjf2r\" (UniqueName: \"kubernetes.io/projected/27acc303-e579-4616-815e-11de04160c26-kube-api-access-kjf2r\") pod \"27acc303-e579-4616-815e-11de04160c26\" (UID: \"27acc303-e579-4616-815e-11de04160c26\") "
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.887258 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27acc303-e579-4616-815e-11de04160c26-kube-api-access-kjf2r" (OuterVolumeSpecName: "kube-api-access-kjf2r") pod "27acc303-e579-4616-815e-11de04160c26" (UID: "27acc303-e579-4616-815e-11de04160c26"). InnerVolumeSpecName "kube-api-access-kjf2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:09:23 crc kubenswrapper[4880]: I1201 03:09:23.985441 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjf2r\" (UniqueName: \"kubernetes.io/projected/27acc303-e579-4616-815e-11de04160c26-kube-api-access-kjf2r\") on node \"crc\" DevicePath \"\""
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.411114 4880 generic.go:334] "Generic (PLEG): container finished" podID="27acc303-e579-4616-815e-11de04160c26" containerID="12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375" exitCode=0
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.411160 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvbjp"
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.411181 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvbjp" event={"ID":"27acc303-e579-4616-815e-11de04160c26","Type":"ContainerDied","Data":"12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375"}
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.412522 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvbjp" event={"ID":"27acc303-e579-4616-815e-11de04160c26","Type":"ContainerDied","Data":"f5fd8cb42f7e26b2dca998f2ddcfafbbba24e9ff4e7ca9656dcf9a63087a852f"}
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.412541 4880 scope.go:117] "RemoveContainer" containerID="12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375"
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.416383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lf2jt" event={"ID":"b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3","Type":"ContainerStarted","Data":"8c8c5222b3694806eac1a0904740caa9ec714993820748dff61bdd787680f38b"}
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.417373 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-lf2jt" event={"ID":"b5b1e358-cf8b-4c6c-92d7-2e34fb759cc3","Type":"ContainerStarted","Data":"24335836ba35dc36a7383518d6fb3343e4cdb8cdd8c1299a4211d41b74fc30df"}
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.438574 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-lf2jt" podStartSLOduration=2.394897662 podStartE2EDuration="2.438538514s" podCreationTimestamp="2025-12-01 03:09:22 +0000 UTC" firstStartedPulling="2025-12-01 03:09:23.608623826 +0000 UTC m=+793.119878208" lastFinishedPulling="2025-12-01 03:09:23.652264688 +0000 UTC m=+793.163519060" observedRunningTime="2025-12-01 03:09:24.43797543 +0000 UTC m=+793.949229882" watchObservedRunningTime="2025-12-01 03:09:24.438538514 +0000 UTC m=+793.949792886"
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.452808 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvbjp"]
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.452989 4880 scope.go:117] "RemoveContainer" containerID="12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375"
Dec 01 03:09:24 crc kubenswrapper[4880]: E1201 03:09:24.455334 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375\": container with ID starting with 12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375 not found: ID does not exist" containerID="12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375"
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.455403 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375"} err="failed to get container status \"12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375\": rpc error: code = NotFound desc = could not find container \"12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375\": container with ID starting with 12e85985c14e4fd6f4610298b227cbd44ae3f7b57c661cdcfbea124ebd4d4375 not found: ID does not exist"
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.456986 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cvbjp"]
Dec 01 03:09:24 crc kubenswrapper[4880]: I1201 03:09:24.796639 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27acc303-e579-4616-815e-11de04160c26" path="/var/lib/kubelet/pods/27acc303-e579-4616-815e-11de04160c26/volumes"
Dec 01 03:09:25 crc kubenswrapper[4880]: I1201 03:09:25.359258 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-l8tdk"
Dec 01 03:09:33 crc kubenswrapper[4880]: I1201 03:09:33.323845 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:33 crc kubenswrapper[4880]: I1201 03:09:33.324708 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:33 crc kubenswrapper[4880]: I1201 03:09:33.372001 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:33 crc kubenswrapper[4880]: I1201 03:09:33.542472 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-lf2jt"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.770354 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cq6rj"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.860387 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"]
Dec 01 03:09:34 crc kubenswrapper[4880]: E1201 03:09:34.860590 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27acc303-e579-4616-815e-11de04160c26" containerName="registry-server"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.860600 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="27acc303-e579-4616-815e-11de04160c26" containerName="registry-server"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.860709 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="27acc303-e579-4616-815e-11de04160c26" containerName="registry-server"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.861452 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.864260 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dqv94"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.877095 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"]
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.979615 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxsnd\" (UniqueName: \"kubernetes.io/projected/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-kube-api-access-gxsnd\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.979938 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-bundle\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:34 crc kubenswrapper[4880]: I1201 03:09:34.980075 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-util\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.082515 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-util\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.082718 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxsnd\" (UniqueName: \"kubernetes.io/projected/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-kube-api-access-gxsnd\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.082816 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-bundle\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.084052 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-util\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.084061 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-bundle\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.108334 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxsnd\" (UniqueName: \"kubernetes.io/projected/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-kube-api-access-gxsnd\") pod \"76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.186249 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"
Dec 01 03:09:35 crc kubenswrapper[4880]: I1201 03:09:35.671730 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92"]
Dec 01 03:09:35 crc kubenswrapper[4880]: W1201 03:09:35.677127 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf71bc5_2ec9_40df_bdbc_2809433a8a6e.slice/crio-a93a5c314dd8b0637b5b65529b5f53c486e12c6d0320b4c43a23141f74498069 WatchSource:0}: Error finding container a93a5c314dd8b0637b5b65529b5f53c486e12c6d0320b4c43a23141f74498069: Status 404 returned error can't find the container with id a93a5c314dd8b0637b5b65529b5f53c486e12c6d0320b4c43a23141f74498069
Dec 01 03:09:36 crc kubenswrapper[4880]: I1201 03:09:36.541525 4880 generic.go:334] "Generic (PLEG): container finished" podID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" 
containerID="3e6bfe92b3a49578ac31c9827addc0cf09b003e76e936ad72c84754125a01bfe" exitCode=0 Dec 01 03:09:36 crc kubenswrapper[4880]: I1201 03:09:36.541751 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" event={"ID":"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e","Type":"ContainerDied","Data":"3e6bfe92b3a49578ac31c9827addc0cf09b003e76e936ad72c84754125a01bfe"} Dec 01 03:09:36 crc kubenswrapper[4880]: I1201 03:09:36.541974 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" event={"ID":"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e","Type":"ContainerStarted","Data":"a93a5c314dd8b0637b5b65529b5f53c486e12c6d0320b4c43a23141f74498069"} Dec 01 03:09:37 crc kubenswrapper[4880]: I1201 03:09:37.553381 4880 generic.go:334] "Generic (PLEG): container finished" podID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerID="da0bdfce07de1fb3228cec0c898619f2ce6fde478de24eb54fe0f26bab1e8cc2" exitCode=0 Dec 01 03:09:37 crc kubenswrapper[4880]: I1201 03:09:37.553472 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" event={"ID":"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e","Type":"ContainerDied","Data":"da0bdfce07de1fb3228cec0c898619f2ce6fde478de24eb54fe0f26bab1e8cc2"} Dec 01 03:09:38 crc kubenswrapper[4880]: I1201 03:09:38.575993 4880 generic.go:334] "Generic (PLEG): container finished" podID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerID="77de244784e079715e2e0fb3918a6bcdd484b0e9f3a681a04cae964691ff1ced" exitCode=0 Dec 01 03:09:38 crc kubenswrapper[4880]: I1201 03:09:38.576073 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" 
event={"ID":"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e","Type":"ContainerDied","Data":"77de244784e079715e2e0fb3918a6bcdd484b0e9f3a681a04cae964691ff1ced"} Dec 01 03:09:39 crc kubenswrapper[4880]: I1201 03:09:39.911324 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.062676 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-util\") pod \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.062816 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-bundle\") pod \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.062936 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxsnd\" (UniqueName: \"kubernetes.io/projected/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-kube-api-access-gxsnd\") pod \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\" (UID: \"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e\") " Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.064471 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-bundle" (OuterVolumeSpecName: "bundle") pod "0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" (UID: "0bf71bc5-2ec9-40df-bdbc-2809433a8a6e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.073741 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-kube-api-access-gxsnd" (OuterVolumeSpecName: "kube-api-access-gxsnd") pod "0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" (UID: "0bf71bc5-2ec9-40df-bdbc-2809433a8a6e"). InnerVolumeSpecName "kube-api-access-gxsnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.097140 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-util" (OuterVolumeSpecName: "util") pod "0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" (UID: "0bf71bc5-2ec9-40df-bdbc-2809433a8a6e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.166278 4880 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-util\") on node \"crc\" DevicePath \"\"" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.166387 4880 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.166416 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxsnd\" (UniqueName: \"kubernetes.io/projected/0bf71bc5-2ec9-40df-bdbc-2809433a8a6e-kube-api-access-gxsnd\") on node \"crc\" DevicePath \"\"" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.615629 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" 
event={"ID":"0bf71bc5-2ec9-40df-bdbc-2809433a8a6e","Type":"ContainerDied","Data":"a93a5c314dd8b0637b5b65529b5f53c486e12c6d0320b4c43a23141f74498069"} Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.615688 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a93a5c314dd8b0637b5b65529b5f53c486e12c6d0320b4c43a23141f74498069" Dec 01 03:09:40 crc kubenswrapper[4880]: I1201 03:09:40.615784 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/76e278c9c6e53d228e0007182ebb1a9ffdd2f8044cc4b07fff1d3b70317rr92" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.145451 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg"] Dec 01 03:09:47 crc kubenswrapper[4880]: E1201 03:09:47.146107 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="extract" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.146121 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="extract" Dec 01 03:09:47 crc kubenswrapper[4880]: E1201 03:09:47.146139 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="util" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.146146 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="util" Dec 01 03:09:47 crc kubenswrapper[4880]: E1201 03:09:47.146163 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="pull" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.146171 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="pull" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.146333 4880 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf71bc5-2ec9-40df-bdbc-2809433a8a6e" containerName="extract" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.146709 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.148854 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-gkzhn" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.172344 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg"] Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.190059 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb6jj\" (UniqueName: \"kubernetes.io/projected/7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1-kube-api-access-sb6jj\") pod \"openstack-operator-controller-operator-6ddd6b47f7-w5qzg\" (UID: \"7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1\") " pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.291540 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb6jj\" (UniqueName: \"kubernetes.io/projected/7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1-kube-api-access-sb6jj\") pod \"openstack-operator-controller-operator-6ddd6b47f7-w5qzg\" (UID: \"7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1\") " pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.310828 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb6jj\" (UniqueName: \"kubernetes.io/projected/7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1-kube-api-access-sb6jj\") pod 
\"openstack-operator-controller-operator-6ddd6b47f7-w5qzg\" (UID: \"7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1\") " pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.461796 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:09:47 crc kubenswrapper[4880]: I1201 03:09:47.738627 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg"] Dec 01 03:09:47 crc kubenswrapper[4880]: W1201 03:09:47.750485 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2c1d72_5629_4cd4_ab60_fcf6ce7e98c1.slice/crio-d57611038ccdb3e0734169c0d1eee31d27e231d473eb2784dcc8f02305a90e44 WatchSource:0}: Error finding container d57611038ccdb3e0734169c0d1eee31d27e231d473eb2784dcc8f02305a90e44: Status 404 returned error can't find the container with id d57611038ccdb3e0734169c0d1eee31d27e231d473eb2784dcc8f02305a90e44 Dec 01 03:09:48 crc kubenswrapper[4880]: I1201 03:09:48.671252 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" event={"ID":"7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1","Type":"ContainerStarted","Data":"d57611038ccdb3e0734169c0d1eee31d27e231d473eb2784dcc8f02305a90e44"} Dec 01 03:09:52 crc kubenswrapper[4880]: I1201 03:09:52.703908 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" event={"ID":"7d2c1d72-5629-4cd4-ab60-fcf6ce7e98c1","Type":"ContainerStarted","Data":"623eb8bc00c89dae38aee17335082fc3da6a54b385eec7f15402bb9bace28aac"} Dec 01 03:09:52 crc kubenswrapper[4880]: I1201 03:09:52.704756 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:09:52 crc kubenswrapper[4880]: I1201 03:09:52.789978 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" podStartSLOduration=1.97853477 podStartE2EDuration="5.789956848s" podCreationTimestamp="2025-12-01 03:09:47 +0000 UTC" firstStartedPulling="2025-12-01 03:09:47.75504178 +0000 UTC m=+817.266296152" lastFinishedPulling="2025-12-01 03:09:51.566463858 +0000 UTC m=+821.077718230" observedRunningTime="2025-12-01 03:09:52.779001838 +0000 UTC m=+822.290256210" watchObservedRunningTime="2025-12-01 03:09:52.789956848 +0000 UTC m=+822.301211220" Dec 01 03:09:57 crc kubenswrapper[4880]: I1201 03:09:57.466306 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6ddd6b47f7-w5qzg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.104517 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.105930 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.111335 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-b9n6t" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.115810 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.116781 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.119623 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qtj8p" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.139396 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.142788 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.143639 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.145018 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rhlx2" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.158469 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.170898 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.171763 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.184719 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ztsq8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.222257 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xzf\" (UniqueName: \"kubernetes.io/projected/552497de-0304-488e-be9a-d9eec00f697f-kube-api-access-82xzf\") pod \"designate-operator-controller-manager-78b4bc895b-8hc88\" (UID: \"552497de-0304-488e-be9a-d9eec00f697f\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.222322 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6kz\" (UniqueName: \"kubernetes.io/projected/d4966512-f477-492e-aec2-7a0131c9ae11-kube-api-access-jm6kz\") pod \"barbican-operator-controller-manager-7d9dfd778-6qzxb\" (UID: \"d4966512-f477-492e-aec2-7a0131c9ae11\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.222359 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhzf\" (UniqueName: \"kubernetes.io/projected/2f40c1c9-64e3-4473-922e-02cc4d62a6af-kube-api-access-qbhzf\") pod \"cinder-operator-controller-manager-859b6ccc6-wxpdg\" (UID: \"2f40c1c9-64e3-4473-922e-02cc4d62a6af\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.235053 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 
03:10:16.237943 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.256937 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.258109 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.271050 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6lr2p" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.285839 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.287045 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.290185 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8zqhl" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.313085 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.320669 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.321603 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.323289 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.323768 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8p6d\" (UniqueName: \"kubernetes.io/projected/10e31949-6791-42f4-ae62-c24c46fef261-kube-api-access-z8p6d\") pod \"glance-operator-controller-manager-668d9c48b9-sr86h\" (UID: \"10e31949-6791-42f4-ae62-c24c46fef261\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.323818 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcq8\" (UniqueName: \"kubernetes.io/projected/993f5873-6998-453d-85a0-28e87d22380a-kube-api-access-lqcq8\") pod \"horizon-operator-controller-manager-68c6d99b8f-rpjrm\" (UID: \"993f5873-6998-453d-85a0-28e87d22380a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.323850 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xzf\" (UniqueName: \"kubernetes.io/projected/552497de-0304-488e-be9a-d9eec00f697f-kube-api-access-82xzf\") pod \"designate-operator-controller-manager-78b4bc895b-8hc88\" (UID: \"552497de-0304-488e-be9a-d9eec00f697f\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.323898 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm6kz\" (UniqueName: \"kubernetes.io/projected/d4966512-f477-492e-aec2-7a0131c9ae11-kube-api-access-jm6kz\") pod 
\"barbican-operator-controller-manager-7d9dfd778-6qzxb\" (UID: \"d4966512-f477-492e-aec2-7a0131c9ae11\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.323994 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhzf\" (UniqueName: \"kubernetes.io/projected/2f40c1c9-64e3-4473-922e-02cc4d62a6af-kube-api-access-qbhzf\") pod \"cinder-operator-controller-manager-859b6ccc6-wxpdg\" (UID: \"2f40c1c9-64e3-4473-922e-02cc4d62a6af\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.324015 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.331532 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8xpth" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.331758 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ppdxv" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.339205 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.350943 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.355191 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.359206 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.362234 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4b48q" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.369432 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm6kz\" (UniqueName: \"kubernetes.io/projected/d4966512-f477-492e-aec2-7a0131c9ae11-kube-api-access-jm6kz\") pod \"barbican-operator-controller-manager-7d9dfd778-6qzxb\" (UID: \"d4966512-f477-492e-aec2-7a0131c9ae11\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.372019 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xzf\" (UniqueName: \"kubernetes.io/projected/552497de-0304-488e-be9a-d9eec00f697f-kube-api-access-82xzf\") pod \"designate-operator-controller-manager-78b4bc895b-8hc88\" (UID: \"552497de-0304-488e-be9a-d9eec00f697f\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.375151 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhzf\" (UniqueName: \"kubernetes.io/projected/2f40c1c9-64e3-4473-922e-02cc4d62a6af-kube-api-access-qbhzf\") pod \"cinder-operator-controller-manager-859b6ccc6-wxpdg\" (UID: \"2f40c1c9-64e3-4473-922e-02cc4d62a6af\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.402428 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.403462 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.407117 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.412598 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-f6szj" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.421647 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426201 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9nfx\" (UniqueName: \"kubernetes.io/projected/ea530c4b-8fc7-407e-bfe5-d8e4957360ea-kube-api-access-t9nfx\") pod \"keystone-operator-controller-manager-546d4bdf48-dllfg\" (UID: \"ea530c4b-8fc7-407e-bfe5-d8e4957360ea\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426332 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krml\" (UniqueName: \"kubernetes.io/projected/c77afc81-c86a-48e4-acae-861622a56981-kube-api-access-2krml\") pod \"heat-operator-controller-manager-5f64f6f8bb-2pmz8\" (UID: \"c77afc81-c86a-48e4-acae-861622a56981\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426424 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8p6d\" (UniqueName: \"kubernetes.io/projected/10e31949-6791-42f4-ae62-c24c46fef261-kube-api-access-z8p6d\") pod 
\"glance-operator-controller-manager-668d9c48b9-sr86h\" (UID: \"10e31949-6791-42f4-ae62-c24c46fef261\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426501 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvrtw\" (UniqueName: \"kubernetes.io/projected/31582323-a6b6-4ef3-8d00-01fb3a3d28f2-kube-api-access-vvrtw\") pod \"ironic-operator-controller-manager-6c548fd776-lbhh8\" (UID: \"31582323-a6b6-4ef3-8d00-01fb3a3d28f2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426587 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcq8\" (UniqueName: \"kubernetes.io/projected/993f5873-6998-453d-85a0-28e87d22380a-kube-api-access-lqcq8\") pod \"horizon-operator-controller-manager-68c6d99b8f-rpjrm\" (UID: \"993f5873-6998-453d-85a0-28e87d22380a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426658 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.426730 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x975\" (UniqueName: \"kubernetes.io/projected/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-kube-api-access-4x975\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.427259 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.430999 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.444898 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.457147 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.469340 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.475893 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.477066 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.496445 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tr65x" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.497772 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8p6d\" (UniqueName: \"kubernetes.io/projected/10e31949-6791-42f4-ae62-c24c46fef261-kube-api-access-z8p6d\") pod \"glance-operator-controller-manager-668d9c48b9-sr86h\" (UID: \"10e31949-6791-42f4-ae62-c24c46fef261\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.505647 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcq8\" (UniqueName: \"kubernetes.io/projected/993f5873-6998-453d-85a0-28e87d22380a-kube-api-access-lqcq8\") pod \"horizon-operator-controller-manager-68c6d99b8f-rpjrm\" (UID: \"993f5873-6998-453d-85a0-28e87d22380a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.526352 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.530568 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9nfx\" (UniqueName: \"kubernetes.io/projected/ea530c4b-8fc7-407e-bfe5-d8e4957360ea-kube-api-access-t9nfx\") pod \"keystone-operator-controller-manager-546d4bdf48-dllfg\" (UID: \"ea530c4b-8fc7-407e-bfe5-d8e4957360ea\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.530607 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk47r\" (UniqueName: \"kubernetes.io/projected/9a675dd8-d96e-4148-aa62-ae93aec9cb85-kube-api-access-bk47r\") pod \"manila-operator-controller-manager-6546668bfd-d5vnm\" (UID: \"9a675dd8-d96e-4148-aa62-ae93aec9cb85\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.530626 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2krml\" (UniqueName: \"kubernetes.io/projected/c77afc81-c86a-48e4-acae-861622a56981-kube-api-access-2krml\") pod \"heat-operator-controller-manager-5f64f6f8bb-2pmz8\" (UID: \"c77afc81-c86a-48e4-acae-861622a56981\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.530660 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvrtw\" (UniqueName: \"kubernetes.io/projected/31582323-a6b6-4ef3-8d00-01fb3a3d28f2-kube-api-access-vvrtw\") pod \"ironic-operator-controller-manager-6c548fd776-lbhh8\" (UID: \"31582323-a6b6-4ef3-8d00-01fb3a3d28f2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.530700 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.530718 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x975\" (UniqueName: \"kubernetes.io/projected/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-kube-api-access-4x975\") pod 
\"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:16 crc kubenswrapper[4880]: E1201 03:10:16.531137 4880 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:16 crc kubenswrapper[4880]: E1201 03:10:16.531177 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert podName:2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a nodeName:}" failed. No retries permitted until 2025-12-01 03:10:17.03116332 +0000 UTC m=+846.542417692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert") pod "infra-operator-controller-manager-57548d458d-8z9hv" (UID: "2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a") : secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.548097 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.549155 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.553736 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r2pzf" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.565921 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.567080 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.583114 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-zrrh8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.583551 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.584477 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.587987 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sx67x" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.589494 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.590742 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9nfx\" (UniqueName: \"kubernetes.io/projected/ea530c4b-8fc7-407e-bfe5-d8e4957360ea-kube-api-access-t9nfx\") pod \"keystone-operator-controller-manager-546d4bdf48-dllfg\" (UID: \"ea530c4b-8fc7-407e-bfe5-d8e4957360ea\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.598735 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.601038 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvrtw\" (UniqueName: \"kubernetes.io/projected/31582323-a6b6-4ef3-8d00-01fb3a3d28f2-kube-api-access-vvrtw\") pod \"ironic-operator-controller-manager-6c548fd776-lbhh8\" (UID: \"31582323-a6b6-4ef3-8d00-01fb3a3d28f2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.601097 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krml\" (UniqueName: \"kubernetes.io/projected/c77afc81-c86a-48e4-acae-861622a56981-kube-api-access-2krml\") pod \"heat-operator-controller-manager-5f64f6f8bb-2pmz8\" (UID: \"c77afc81-c86a-48e4-acae-861622a56981\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.604144 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x975\" (UniqueName: \"kubernetes.io/projected/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-kube-api-access-4x975\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: 
\"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.614070 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.620010 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.631766 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk47r\" (UniqueName: \"kubernetes.io/projected/9a675dd8-d96e-4148-aa62-ae93aec9cb85-kube-api-access-bk47r\") pod \"manila-operator-controller-manager-6546668bfd-d5vnm\" (UID: \"9a675dd8-d96e-4148-aa62-ae93aec9cb85\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.632004 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhvt6\" (UniqueName: \"kubernetes.io/projected/4fbd68da-2a71-4a5d-a2aa-d44a108c7323-kube-api-access-qhvt6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-fnq86\" (UID: \"4fbd68da-2a71-4a5d-a2aa-d44a108c7323\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.632091 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq54n\" (UniqueName: \"kubernetes.io/projected/ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c-kube-api-access-zq54n\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vvt67\" (UID: \"ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 
03:10:16.632193 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sw85\" (UniqueName: \"kubernetes.io/projected/c2db9222-3278-42fc-860c-bcdadff99aa3-kube-api-access-9sw85\") pod \"nova-operator-controller-manager-697bc559fc-94tz5\" (UID: \"c2db9222-3278-42fc-860c-bcdadff99aa3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.644857 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.651969 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.653384 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.655214 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6zrff" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.655703 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.696027 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk47r\" (UniqueName: \"kubernetes.io/projected/9a675dd8-d96e-4148-aa62-ae93aec9cb85-kube-api-access-bk47r\") pod \"manila-operator-controller-manager-6546668bfd-d5vnm\" (UID: \"9a675dd8-d96e-4148-aa62-ae93aec9cb85\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.696260 4880 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.752465 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.754127 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.754220 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.762695 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5xb7g" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.772197 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhvt6\" (UniqueName: \"kubernetes.io/projected/4fbd68da-2a71-4a5d-a2aa-d44a108c7323-kube-api-access-qhvt6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-fnq86\" (UID: \"4fbd68da-2a71-4a5d-a2aa-d44a108c7323\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.772264 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq54n\" (UniqueName: \"kubernetes.io/projected/ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c-kube-api-access-zq54n\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vvt67\" (UID: \"ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.772356 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7r8b\" (UniqueName: \"kubernetes.io/projected/7aafd3a8-2df9-4639-b657-a39a2e915e78-kube-api-access-m7r8b\") pod \"octavia-operator-controller-manager-998648c74-tm8qf\" (UID: \"7aafd3a8-2df9-4639-b657-a39a2e915e78\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.772393 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sw85\" (UniqueName: \"kubernetes.io/projected/c2db9222-3278-42fc-860c-bcdadff99aa3-kube-api-access-9sw85\") pod \"nova-operator-controller-manager-697bc559fc-94tz5\" (UID: \"c2db9222-3278-42fc-860c-bcdadff99aa3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.772511 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.772647 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbgr\" (UniqueName: \"kubernetes.io/projected/a7e64f8e-2446-4209-9f33-94696bf7d9ae-kube-api-access-pjbgr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.778319 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.794132 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.797376 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.825426 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sw85\" (UniqueName: \"kubernetes.io/projected/c2db9222-3278-42fc-860c-bcdadff99aa3-kube-api-access-9sw85\") pod \"nova-operator-controller-manager-697bc559fc-94tz5\" (UID: \"c2db9222-3278-42fc-860c-bcdadff99aa3\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.826289 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq54n\" (UniqueName: \"kubernetes.io/projected/ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c-kube-api-access-zq54n\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vvt67\" (UID: \"ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.838201 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhvt6\" (UniqueName: \"kubernetes.io/projected/4fbd68da-2a71-4a5d-a2aa-d44a108c7323-kube-api-access-qhvt6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-fnq86\" (UID: \"4fbd68da-2a71-4a5d-a2aa-d44a108c7323\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.886285 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-m7r8b\" (UniqueName: \"kubernetes.io/projected/7aafd3a8-2df9-4639-b657-a39a2e915e78-kube-api-access-m7r8b\") pod \"octavia-operator-controller-manager-998648c74-tm8qf\" (UID: \"7aafd3a8-2df9-4639-b657-a39a2e915e78\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.886441 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfw9\" (UniqueName: \"kubernetes.io/projected/c99af8a4-02e4-405f-bc7c-d4647a6db0d6-kube-api-access-fjfw9\") pod \"ovn-operator-controller-manager-b6456fdb6-l7pfx\" (UID: \"c99af8a4-02e4-405f-bc7c-d4647a6db0d6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.886489 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.886559 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbgr\" (UniqueName: \"kubernetes.io/projected/a7e64f8e-2446-4209-9f33-94696bf7d9ae-kube-api-access-pjbgr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:16 crc kubenswrapper[4880]: E1201 03:10:16.887330 4880 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:16 crc 
kubenswrapper[4880]: E1201 03:10:16.887378 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert podName:a7e64f8e-2446-4209-9f33-94696bf7d9ae nodeName:}" failed. No retries permitted until 2025-12-01 03:10:17.387364847 +0000 UTC m=+846.898619219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" (UID: "a7e64f8e-2446-4209-9f33-94696bf7d9ae") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.898280 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-26hsz"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.901368 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.919389 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.923480 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.929424 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nsrgq" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.942092 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.949396 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbgr\" (UniqueName: \"kubernetes.io/projected/a7e64f8e-2446-4209-9f33-94696bf7d9ae-kube-api-access-pjbgr\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.952184 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7r8b\" (UniqueName: \"kubernetes.io/projected/7aafd3a8-2df9-4639-b657-a39a2e915e78-kube-api-access-m7r8b\") pod \"octavia-operator-controller-manager-998648c74-tm8qf\" (UID: \"7aafd3a8-2df9-4639-b657-a39a2e915e78\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.980302 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m"] Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.982091 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.988668 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjfw9\" (UniqueName: \"kubernetes.io/projected/c99af8a4-02e4-405f-bc7c-d4647a6db0d6-kube-api-access-fjfw9\") pod \"ovn-operator-controller-manager-b6456fdb6-l7pfx\" (UID: \"c99af8a4-02e4-405f-bc7c-d4647a6db0d6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" Dec 01 03:10:16 crc kubenswrapper[4880]: I1201 03:10:16.999169 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rw87k" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.015637 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-26hsz"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.029144 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.036742 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjfw9\" (UniqueName: \"kubernetes.io/projected/c99af8a4-02e4-405f-bc7c-d4647a6db0d6-kube-api-access-fjfw9\") pod \"ovn-operator-controller-manager-b6456fdb6-l7pfx\" (UID: \"c99af8a4-02e4-405f-bc7c-d4647a6db0d6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.074890 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.075964 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.086564 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lbhzv" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.094156 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46vk\" (UniqueName: \"kubernetes.io/projected/22dffe57-f99a-48e7-840c-e7f29f399e50-kube-api-access-d46vk\") pod \"swift-operator-controller-manager-5f8c65bbfc-vb84m\" (UID: \"22dffe57-f99a-48e7-840c-e7f29f399e50\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.094209 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdpq2\" (UniqueName: \"kubernetes.io/projected/33a7c344-b7ab-48cb-bd7f-88e991a56ee3-kube-api-access-fdpq2\") pod \"placement-operator-controller-manager-78f8948974-26hsz\" (UID: \"33a7c344-b7ab-48cb-bd7f-88e991a56ee3\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.094257 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.094400 4880 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.094437 4880 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert podName:2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a nodeName:}" failed. No retries permitted until 2025-12-01 03:10:18.094423309 +0000 UTC m=+847.605677671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert") pod "infra-operator-controller-manager-57548d458d-8z9hv" (UID: "2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a") : secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.100934 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-29k9v"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.102152 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.107197 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ggsxf" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.114137 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.130248 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-29k9v"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.137276 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.150049 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.152031 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.153419 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.166196 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-kzh6h" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.166357 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.198132 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjd95\" (UniqueName: \"kubernetes.io/projected/3c1771bd-575e-4ccd-9649-aa9cf2ecda71-kube-api-access-cjd95\") pod \"telemetry-operator-controller-manager-76cc84c6bb-shn7t\" (UID: \"3c1771bd-575e-4ccd-9649-aa9cf2ecda71\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.198183 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46vk\" (UniqueName: \"kubernetes.io/projected/22dffe57-f99a-48e7-840c-e7f29f399e50-kube-api-access-d46vk\") pod \"swift-operator-controller-manager-5f8c65bbfc-vb84m\" (UID: \"22dffe57-f99a-48e7-840c-e7f29f399e50\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.198220 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-96fh8\" (UniqueName: \"kubernetes.io/projected/c69469ac-293e-422a-9399-d61c09fe4774-kube-api-access-96fh8\") pod \"test-operator-controller-manager-5854674fcc-29k9v\" (UID: \"c69469ac-293e-422a-9399-d61c09fe4774\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.198251 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdpq2\" (UniqueName: \"kubernetes.io/projected/33a7c344-b7ab-48cb-bd7f-88e991a56ee3-kube-api-access-fdpq2\") pod \"placement-operator-controller-manager-78f8948974-26hsz\" (UID: \"33a7c344-b7ab-48cb-bd7f-88e991a56ee3\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.199848 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.208348 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.216182 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.222337 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-r8s9v" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.231663 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdpq2\" (UniqueName: \"kubernetes.io/projected/33a7c344-b7ab-48cb-bd7f-88e991a56ee3-kube-api-access-fdpq2\") pod \"placement-operator-controller-manager-78f8948974-26hsz\" (UID: \"33a7c344-b7ab-48cb-bd7f-88e991a56ee3\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.231788 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.238980 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.239889 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.240019 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.240228 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.271255 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.274413 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wpdgj" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.277914 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.286469 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46vk\" (UniqueName: \"kubernetes.io/projected/22dffe57-f99a-48e7-840c-e7f29f399e50-kube-api-access-d46vk\") pod \"swift-operator-controller-manager-5f8c65bbfc-vb84m\" (UID: \"22dffe57-f99a-48e7-840c-e7f29f399e50\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.300498 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fh8\" (UniqueName: \"kubernetes.io/projected/c69469ac-293e-422a-9399-d61c09fe4774-kube-api-access-96fh8\") pod \"test-operator-controller-manager-5854674fcc-29k9v\" (UID: \"c69469ac-293e-422a-9399-d61c09fe4774\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.300607 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldg4s\" (UniqueName: \"kubernetes.io/projected/d600d81b-97bf-4127-87f6-46d0258b4eea-kube-api-access-ldg4s\") pod \"watcher-operator-controller-manager-769dc69bc-rdncc\" (UID: 
\"d600d81b-97bf-4127-87f6-46d0258b4eea\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.300652 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjd95\" (UniqueName: \"kubernetes.io/projected/3c1771bd-575e-4ccd-9649-aa9cf2ecda71-kube-api-access-cjd95\") pod \"telemetry-operator-controller-manager-76cc84c6bb-shn7t\" (UID: \"3c1771bd-575e-4ccd-9649-aa9cf2ecda71\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.324467 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.338435 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjd95\" (UniqueName: \"kubernetes.io/projected/3c1771bd-575e-4ccd-9649-aa9cf2ecda71-kube-api-access-cjd95\") pod \"telemetry-operator-controller-manager-76cc84c6bb-shn7t\" (UID: \"3c1771bd-575e-4ccd-9649-aa9cf2ecda71\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.341602 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96fh8\" (UniqueName: \"kubernetes.io/projected/c69469ac-293e-422a-9399-d61c09fe4774-kube-api-access-96fh8\") pod \"test-operator-controller-manager-5854674fcc-29k9v\" (UID: \"c69469ac-293e-422a-9399-d61c09fe4774\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.410535 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbw8h\" (UniqueName: \"kubernetes.io/projected/b627ae1c-571f-4009-9c66-65e2d05777f4-kube-api-access-lbw8h\") 
pod \"rabbitmq-cluster-operator-manager-668c99d594-ffctz\" (UID: \"b627ae1c-571f-4009-9c66-65e2d05777f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.410608 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmkp4\" (UniqueName: \"kubernetes.io/projected/f29c2d86-0f01-49d3-a040-74b70269010d-kube-api-access-jmkp4\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.410641 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.410662 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldg4s\" (UniqueName: \"kubernetes.io/projected/d600d81b-97bf-4127-87f6-46d0258b4eea-kube-api-access-ldg4s\") pod \"watcher-operator-controller-manager-769dc69bc-rdncc\" (UID: \"d600d81b-97bf-4127-87f6-46d0258b4eea\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.410691 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " 
pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.410731 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.422656 4880 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.422742 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert podName:a7e64f8e-2446-4209-9f33-94696bf7d9ae nodeName:}" failed. No retries permitted until 2025-12-01 03:10:18.422722819 +0000 UTC m=+847.933977191 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" (UID: "a7e64f8e-2446-4209-9f33-94696bf7d9ae") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.432254 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldg4s\" (UniqueName: \"kubernetes.io/projected/d600d81b-97bf-4127-87f6-46d0258b4eea-kube-api-access-ldg4s\") pod \"watcher-operator-controller-manager-769dc69bc-rdncc\" (UID: \"d600d81b-97bf-4127-87f6-46d0258b4eea\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.467166 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.508246 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.512109 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbw8h\" (UniqueName: \"kubernetes.io/projected/b627ae1c-571f-4009-9c66-65e2d05777f4-kube-api-access-lbw8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ffctz\" (UID: \"b627ae1c-571f-4009-9c66-65e2d05777f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.512236 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmkp4\" (UniqueName: \"kubernetes.io/projected/f29c2d86-0f01-49d3-a040-74b70269010d-kube-api-access-jmkp4\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.512370 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.512451 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.512682 4880 
secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.512892 4880 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.512916 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:18.012743727 +0000 UTC m=+847.523998099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "metrics-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: E1201 03:10:17.512944 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:18.012927642 +0000 UTC m=+847.524182014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "webhook-server-cert" not found Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.522091 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.543940 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.544187 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmkp4\" (UniqueName: \"kubernetes.io/projected/f29c2d86-0f01-49d3-a040-74b70269010d-kube-api-access-jmkp4\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.548217 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbw8h\" (UniqueName: \"kubernetes.io/projected/b627ae1c-571f-4009-9c66-65e2d05777f4-kube-api-access-lbw8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ffctz\" (UID: \"b627ae1c-571f-4009-9c66-65e2d05777f4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.586605 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.592566 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.594701 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.643727 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8"] Dec 01 03:10:17 crc kubenswrapper[4880]: W1201 03:10:17.680331 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552497de_0304_488e_be9a_d9eec00f697f.slice/crio-6e4259492507afa81ecf0376606f28ebbb8e2f8f29de25804a7b19a80bd5cd5c WatchSource:0}: Error finding container 6e4259492507afa81ecf0376606f28ebbb8e2f8f29de25804a7b19a80bd5cd5c: Status 404 returned error can't find the container with id 6e4259492507afa81ecf0376606f28ebbb8e2f8f29de25804a7b19a80bd5cd5c Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.928245 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.939743 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm"] Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.975144 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" event={"ID":"c77afc81-c86a-48e4-acae-861622a56981","Type":"ContainerStarted","Data":"644a226c031603ff50395eb89753a7e477e807b50ce227b40033412471bf2e53"} Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.977230 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" 
event={"ID":"2f40c1c9-64e3-4473-922e-02cc4d62a6af","Type":"ContainerStarted","Data":"d1a4cb239f4cf112251f5a8fe11a7e1655ed108e9256bd09eb9b2d6d2a1a792f"} Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.978993 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" event={"ID":"552497de-0304-488e-be9a-d9eec00f697f","Type":"ContainerStarted","Data":"6e4259492507afa81ecf0376606f28ebbb8e2f8f29de25804a7b19a80bd5cd5c"} Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.979814 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" event={"ID":"993f5873-6998-453d-85a0-28e87d22380a","Type":"ContainerStarted","Data":"f622a7df97f99f40ef34221d5e00cf4d4c79da36eaca5a439367b9f2d42a1b40"} Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.982123 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" event={"ID":"d4966512-f477-492e-aec2-7a0131c9ae11","Type":"ContainerStarted","Data":"3659080fca5895c850bc36380b083d0ff35bdd72fe543f5c71401b5c6c12c177"} Dec 01 03:10:17 crc kubenswrapper[4880]: I1201 03:10:17.982984 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" event={"ID":"9a675dd8-d96e-4148-aa62-ae93aec9cb85","Type":"ContainerStarted","Data":"df11f83262d37b8bdadd8a0a4e37cb0f47d96b7e51b976d590711a3d1074c7c0"} Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.040036 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 
01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.040100 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.040174 4880 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.040246 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:19.040227502 +0000 UTC m=+848.551481874 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "webhook-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.040297 4880 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.040359 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:19.040340074 +0000 UTC m=+848.551594516 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "metrics-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.138127 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.139807 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.141437 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.141718 4880 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.141769 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert podName:2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a nodeName:}" failed. No retries permitted until 2025-12-01 03:10:20.141753718 +0000 UTC m=+849.653008090 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert") pod "infra-operator-controller-manager-57548d458d-8z9hv" (UID: "2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a") : secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.184987 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.316804 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.324242 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.340096 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-26hsz"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.345566 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.353864 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86"] Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.362097 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zq54n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-vvt67_openstack-operators(ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.365468 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7r8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tm8qf_openstack-operators(7aafd3a8-2df9-4639-b657-a39a2e915e78): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.365637 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zq54n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-vvt67_openstack-operators(ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.369965 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" podUID="ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c" Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.370637 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf"] Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.376132 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7r8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tm8qf_openstack-operators(7aafd3a8-2df9-4639-b657-a39a2e915e78): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.377851 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS 
exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" podUID="7aafd3a8-2df9-4639-b657-a39a2e915e78" Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.378435 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m"] Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.385733 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d46vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-vb84m_openstack-operators(22dffe57-f99a-48e7-840c-e7f29f399e50): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.387881 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d46vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-vb84m_openstack-operators(22dffe57-f99a-48e7-840c-e7f29f399e50): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.398138 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" podUID="22dffe57-f99a-48e7-840c-e7f29f399e50" Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.447506 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.447672 
4880 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.447714 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert podName:a7e64f8e-2446-4209-9f33-94696bf7d9ae nodeName:}" failed. No retries permitted until 2025-12-01 03:10:20.447700899 +0000 UTC m=+849.958955271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" (UID: "a7e64f8e-2446-4209-9f33-94696bf7d9ae") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.490653 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.496398 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc"] Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.515058 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-29k9v"] Dec 01 03:10:18 crc kubenswrapper[4880]: W1201 03:10:18.533152 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69469ac_293e_422a_9399_d61c09fe4774.slice/crio-a088e288f4330d4e8991e2779867d4e41b93f3cef7c826a50a58db0cac23190e WatchSource:0}: Error finding container a088e288f4330d4e8991e2779867d4e41b93f3cef7c826a50a58db0cac23190e: Status 404 returned error can't find the container with id a088e288f4330d4e8991e2779867d4e41b93f3cef7c826a50a58db0cac23190e Dec 01 
03:10:18 crc kubenswrapper[4880]: W1201 03:10:18.533919 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c1771bd_575e_4ccd_9649_aa9cf2ecda71.slice/crio-943e30af306874d24c9b966437571e27380948a284ce7459f98a4ee8396b7b20 WatchSource:0}: Error finding container 943e30af306874d24c9b966437571e27380948a284ce7459f98a4ee8396b7b20: Status 404 returned error can't find the container with id 943e30af306874d24c9b966437571e27380948a284ce7459f98a4ee8396b7b20 Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.535914 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-29k9v_openstack-operators(c69469ac-293e-422a-9399-d61c09fe4774): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.539497 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjd95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-shn7t_openstack-operators(3c1771bd-575e-4ccd-9649-aa9cf2ecda71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.540092 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-29k9v_openstack-operators(c69469ac-293e-422a-9399-d61c09fe4774): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: W1201 03:10:18.540529 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb627ae1c_571f_4009_9c66_65e2d05777f4.slice/crio-a11104833da40b4d156bcc1ebef55d11a602dd0030942ee3f933098c17944b51 WatchSource:0}: Error finding container a11104833da40b4d156bcc1ebef55d11a602dd0030942ee3f933098c17944b51: Status 404 returned error can't find the container with id a11104833da40b4d156bcc1ebef55d11a602dd0030942ee3f933098c17944b51 Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.541188 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" 
podUID="c69469ac-293e-422a-9399-d61c09fe4774" Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.541957 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz"] Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.542782 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbw8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ffctz_openstack-operators(b627ae1c-571f-4009-9c66-65e2d05777f4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.544070 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" podUID="b627ae1c-571f-4009-9c66-65e2d05777f4" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.545735 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjd95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-shn7t_openstack-operators(3c1771bd-575e-4ccd-9649-aa9cf2ecda71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 03:10:18 crc kubenswrapper[4880]: E1201 03:10:18.547512 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" podUID="3c1771bd-575e-4ccd-9649-aa9cf2ecda71" Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.995033 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" event={"ID":"c2db9222-3278-42fc-860c-bcdadff99aa3","Type":"ContainerStarted","Data":"394186651de7271b4bcfc2057677cdd1a9c11f06ccb0ff03f7b228fd8f0b720e"} Dec 01 03:10:18 crc kubenswrapper[4880]: I1201 03:10:18.997663 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" event={"ID":"3c1771bd-575e-4ccd-9649-aa9cf2ecda71","Type":"ContainerStarted","Data":"943e30af306874d24c9b966437571e27380948a284ce7459f98a4ee8396b7b20"} Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.003681 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" podUID="3c1771bd-575e-4ccd-9649-aa9cf2ecda71" Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.014316 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" event={"ID":"33a7c344-b7ab-48cb-bd7f-88e991a56ee3","Type":"ContainerStarted","Data":"58abf252101967c95512c7ffd233e4c0a6325f6b5e576cc1aa4740fa9599c367"} Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.034581 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" event={"ID":"c99af8a4-02e4-405f-bc7c-d4647a6db0d6","Type":"ContainerStarted","Data":"c2b65325d33a5ad73747a8a46a4c808e8d46f012ba8b928ba02a052c0409043d"} Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.035791 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" event={"ID":"ea530c4b-8fc7-407e-bfe5-d8e4957360ea","Type":"ContainerStarted","Data":"2467a9605170554114afe71538b3833bdcc46a76edb7c39ae9c544f7af55b4ef"} Dec 01 03:10:19 crc 
kubenswrapper[4880]: I1201 03:10:19.037216 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" event={"ID":"4fbd68da-2a71-4a5d-a2aa-d44a108c7323","Type":"ContainerStarted","Data":"2d20283c5be6b7df4ec2e4aa915ed1562b8c21a99f426586d7becbd6d68833f6"} Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.039952 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" event={"ID":"ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c","Type":"ContainerStarted","Data":"e96f7eba2d9649639ccd64730077973e7468e8f270d808128f23d1e145a0206e"} Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.041814 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" podUID="ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c" Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.042434 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" event={"ID":"22dffe57-f99a-48e7-840c-e7f29f399e50","Type":"ContainerStarted","Data":"27c4e75177d0174c7208d4583e81b68d5c917722c27cbf9adeb7ecd3a9d0f43a"} Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.046574 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" podUID="22dffe57-f99a-48e7-840c-e7f29f399e50" Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.047285 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" event={"ID":"7aafd3a8-2df9-4639-b657-a39a2e915e78","Type":"ContainerStarted","Data":"5e3e7c079cde48d97b43c1ec8b5d114eb07e0745cd54144b3bdf3db022b3a81a"} Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.050220 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" podUID="7aafd3a8-2df9-4639-b657-a39a2e915e78" Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.050501 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" event={"ID":"31582323-a6b6-4ef3-8d00-01fb3a3d28f2","Type":"ContainerStarted","Data":"badd3a4e2f8b7a7761d4187c6783ef801b9c73d1047aa5ea35e7bdc5226e4bcc"} Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.053284 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" 
event={"ID":"c69469ac-293e-422a-9399-d61c09fe4774","Type":"ContainerStarted","Data":"a088e288f4330d4e8991e2779867d4e41b93f3cef7c826a50a58db0cac23190e"} Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.054956 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.055183 4880 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.055257 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:21.055239924 +0000 UTC m=+850.566494296 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "webhook-server-cert" not found Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.055707 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.057620 4880 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.069108 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" event={"ID":"d600d81b-97bf-4127-87f6-46d0258b4eea","Type":"ContainerStarted","Data":"898faa3118997f7cc84f229efa7df9a42e99c209caf3b477f2d0eb4d995f7757"} Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.078180 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" event={"ID":"b627ae1c-571f-4009-9c66-65e2d05777f4","Type":"ContainerStarted","Data":"a11104833da40b4d156bcc1ebef55d11a602dd0030942ee3f933098c17944b51"} Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.078976 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:21.078951926 +0000 UTC m=+850.590206298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "metrics-server-cert" not found Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.082379 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" podUID="c69469ac-293e-422a-9399-d61c09fe4774" Dec 01 03:10:19 crc kubenswrapper[4880]: I1201 03:10:19.089375 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" event={"ID":"10e31949-6791-42f4-ae62-c24c46fef261","Type":"ContainerStarted","Data":"a9a8aa55d972a88cb2cdaaa4983ffd6cdc64def74b9ef8c6cbeceb6cf72d1716"} Dec 01 03:10:19 crc kubenswrapper[4880]: E1201 03:10:19.089529 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" podUID="b627ae1c-571f-4009-9c66-65e2d05777f4" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.104184 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" podUID="b627ae1c-571f-4009-9c66-65e2d05777f4" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.104351 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" podUID="22dffe57-f99a-48e7-840c-e7f29f399e50" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.105979 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" podUID="3c1771bd-575e-4ccd-9649-aa9cf2ecda71" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.112468 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" podUID="c69469ac-293e-422a-9399-d61c09fe4774" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.112528 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" podUID="ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.112580 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" podUID="7aafd3a8-2df9-4639-b657-a39a2e915e78" Dec 01 03:10:20 crc kubenswrapper[4880]: I1201 03:10:20.199672 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.200045 4880 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.200263 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert podName:2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a nodeName:}" failed. No retries permitted until 2025-12-01 03:10:24.200248253 +0000 UTC m=+853.711502625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert") pod "infra-operator-controller-manager-57548d458d-8z9hv" (UID: "2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a") : secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:20 crc kubenswrapper[4880]: I1201 03:10:20.503074 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.503489 4880 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:20 crc kubenswrapper[4880]: E1201 03:10:20.503544 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert podName:a7e64f8e-2446-4209-9f33-94696bf7d9ae nodeName:}" failed. No retries permitted until 2025-12-01 03:10:24.503528819 +0000 UTC m=+854.014783191 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" (UID: "a7e64f8e-2446-4209-9f33-94696bf7d9ae") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:21 crc kubenswrapper[4880]: I1201 03:10:21.113057 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:21 crc kubenswrapper[4880]: I1201 03:10:21.113122 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:21 crc kubenswrapper[4880]: E1201 03:10:21.113285 4880 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 03:10:21 crc kubenswrapper[4880]: E1201 03:10:21.113327 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:25.11331365 +0000 UTC m=+854.624568022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "metrics-server-cert" not found Dec 01 03:10:21 crc kubenswrapper[4880]: E1201 03:10:21.113622 4880 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 03:10:21 crc kubenswrapper[4880]: E1201 03:10:21.113646 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:25.113638658 +0000 UTC m=+854.624893020 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "webhook-server-cert" not found Dec 01 03:10:24 crc kubenswrapper[4880]: I1201 03:10:24.255950 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:24 crc kubenswrapper[4880]: E1201 03:10:24.256772 4880 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:24 crc kubenswrapper[4880]: E1201 03:10:24.256862 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert 
podName:2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a nodeName:}" failed. No retries permitted until 2025-12-01 03:10:32.256833856 +0000 UTC m=+861.768088268 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert") pod "infra-operator-controller-manager-57548d458d-8z9hv" (UID: "2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a") : secret "infra-operator-webhook-server-cert" not found Dec 01 03:10:24 crc kubenswrapper[4880]: I1201 03:10:24.562255 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:24 crc kubenswrapper[4880]: E1201 03:10:24.562524 4880 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:24 crc kubenswrapper[4880]: E1201 03:10:24.562593 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert podName:a7e64f8e-2446-4209-9f33-94696bf7d9ae nodeName:}" failed. No retries permitted until 2025-12-01 03:10:32.562571362 +0000 UTC m=+862.073825754 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" (UID: "a7e64f8e-2446-4209-9f33-94696bf7d9ae") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 03:10:25 crc kubenswrapper[4880]: I1201 03:10:25.169615 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:25 crc kubenswrapper[4880]: I1201 03:10:25.169686 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:25 crc kubenswrapper[4880]: E1201 03:10:25.169786 4880 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 03:10:25 crc kubenswrapper[4880]: E1201 03:10:25.169798 4880 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 03:10:25 crc kubenswrapper[4880]: E1201 03:10:25.169845 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:33.16982791 +0000 UTC m=+862.681082282 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "webhook-server-cert" not found Dec 01 03:10:25 crc kubenswrapper[4880]: E1201 03:10:25.169864 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs podName:f29c2d86-0f01-49d3-a040-74b70269010d nodeName:}" failed. No retries permitted until 2025-12-01 03:10:33.169855911 +0000 UTC m=+862.681110283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs") pod "openstack-operator-controller-manager-644d6ccc8b-dp9d2" (UID: "f29c2d86-0f01-49d3-a040-74b70269010d") : secret "metrics-server-cert" not found Dec 01 03:10:32 crc kubenswrapper[4880]: I1201 03:10:32.300256 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:32 crc kubenswrapper[4880]: I1201 03:10:32.313275 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a-cert\") pod \"infra-operator-controller-manager-57548d458d-8z9hv\" (UID: \"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:32 crc kubenswrapper[4880]: I1201 03:10:32.567478 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:10:32 crc kubenswrapper[4880]: I1201 03:10:32.607754 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:32 crc kubenswrapper[4880]: I1201 03:10:32.611169 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7e64f8e-2446-4209-9f33-94696bf7d9ae-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl\" (UID: \"a7e64f8e-2446-4209-9f33-94696bf7d9ae\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:32 crc kubenswrapper[4880]: I1201 03:10:32.871562 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:10:33 crc kubenswrapper[4880]: E1201 03:10:33.110318 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172" Dec 01 03:10:33 crc kubenswrapper[4880]: E1201 03:10:33.110481 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:440cde33d3a2a0c545cd1c110a3634eb85544370f448865b97a13c38034b0172,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8p6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-668d9c48b9-sr86h_openstack-operators(10e31949-6791-42f4-ae62-c24c46fef261): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:10:33 crc kubenswrapper[4880]: I1201 03:10:33.217197 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:33 crc kubenswrapper[4880]: I1201 03:10:33.217314 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:33 crc kubenswrapper[4880]: I1201 03:10:33.224595 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-webhook-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:33 crc kubenswrapper[4880]: I1201 03:10:33.236952 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f29c2d86-0f01-49d3-a040-74b70269010d-metrics-certs\") pod \"openstack-operator-controller-manager-644d6ccc8b-dp9d2\" (UID: \"f29c2d86-0f01-49d3-a040-74b70269010d\") " pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:33 crc kubenswrapper[4880]: I1201 03:10:33.471481 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:43 crc kubenswrapper[4880]: E1201 03:10:43.918481 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3" Dec 01 03:10:43 crc kubenswrapper[4880]: E1201 03:10:43.919937 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9nfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-dllfg_openstack-operators(ea530c4b-8fc7-407e-bfe5-d8e4957360ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:10:44 crc kubenswrapper[4880]: E1201 03:10:44.616711 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 01 03:10:44 crc kubenswrapper[4880]: E1201 03:10:44.617205 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sw85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-94tz5_openstack-operators(c2db9222-3278-42fc-860c-bcdadff99aa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:10:45 crc kubenswrapper[4880]: E1201 03:10:45.207821 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 01 03:10:45 crc kubenswrapper[4880]: E1201 03:10:45.208041 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7r8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tm8qf_openstack-operators(7aafd3a8-2df9-4639-b657-a39a2e915e78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:10:45 crc kubenswrapper[4880]: E1201 03:10:45.739057 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 01 03:10:45 crc kubenswrapper[4880]: E1201 03:10:45.739256 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjd95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-shn7t_openstack-operators(3c1771bd-575e-4ccd-9649-aa9cf2ecda71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:10:49 crc kubenswrapper[4880]: I1201 03:10:49.458581 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2"] Dec 01 03:10:49 crc kubenswrapper[4880]: I1201 03:10:49.565083 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv"] Dec 01 03:10:49 crc kubenswrapper[4880]: I1201 03:10:49.604370 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl"] Dec 01 03:10:49 crc kubenswrapper[4880]: W1201 03:10:49.909425 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf29c2d86_0f01_49d3_a040_74b70269010d.slice/crio-972991773b7668c55e1591a467e872442cc274e371ee7dc4aaed5f5819e4e7ec WatchSource:0}: Error finding container 
972991773b7668c55e1591a467e872442cc274e371ee7dc4aaed5f5819e4e7ec: Status 404 returned error can't find the container with id 972991773b7668c55e1591a467e872442cc274e371ee7dc4aaed5f5819e4e7ec Dec 01 03:10:49 crc kubenswrapper[4880]: W1201 03:10:49.912009 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e64f8e_2446_4209_9f33_94696bf7d9ae.slice/crio-53a855327bff2677e201b4f489d7d5948c126f815371145963e2902b74c78ec6 WatchSource:0}: Error finding container 53a855327bff2677e201b4f489d7d5948c126f815371145963e2902b74c78ec6: Status 404 returned error can't find the container with id 53a855327bff2677e201b4f489d7d5948c126f815371145963e2902b74c78ec6 Dec 01 03:10:49 crc kubenswrapper[4880]: W1201 03:10:49.913775 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4d6e6e_d5f5_4142_9b08_f99c025bcf8a.slice/crio-db366f37862775fdf9a53eff2637083873b65eca638b8db1daa9e96c9baccb01 WatchSource:0}: Error finding container db366f37862775fdf9a53eff2637083873b65eca638b8db1daa9e96c9baccb01: Status 404 returned error can't find the container with id db366f37862775fdf9a53eff2637083873b65eca638b8db1daa9e96c9baccb01 Dec 01 03:10:50 crc kubenswrapper[4880]: I1201 03:10:50.351017 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" event={"ID":"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a","Type":"ContainerStarted","Data":"db366f37862775fdf9a53eff2637083873b65eca638b8db1daa9e96c9baccb01"} Dec 01 03:10:50 crc kubenswrapper[4880]: I1201 03:10:50.354173 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" event={"ID":"d600d81b-97bf-4127-87f6-46d0258b4eea","Type":"ContainerStarted","Data":"1d4b7986452403bcc3a859c5bfb88926a581bd92d6a96d334b7b53444b760160"} Dec 01 03:10:50 crc 
kubenswrapper[4880]: I1201 03:10:50.355668 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" event={"ID":"f29c2d86-0f01-49d3-a040-74b70269010d","Type":"ContainerStarted","Data":"972991773b7668c55e1591a467e872442cc274e371ee7dc4aaed5f5819e4e7ec"} Dec 01 03:10:50 crc kubenswrapper[4880]: I1201 03:10:50.357590 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" event={"ID":"552497de-0304-488e-be9a-d9eec00f697f","Type":"ContainerStarted","Data":"44e61f54d947570a6ba1dbcd63ef424778c7abd6ac3c2458cffe3238a6138ccc"} Dec 01 03:10:50 crc kubenswrapper[4880]: I1201 03:10:50.358785 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" event={"ID":"a7e64f8e-2446-4209-9f33-94696bf7d9ae","Type":"ContainerStarted","Data":"53a855327bff2677e201b4f489d7d5948c126f815371145963e2902b74c78ec6"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.378742 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" event={"ID":"2f40c1c9-64e3-4473-922e-02cc4d62a6af","Type":"ContainerStarted","Data":"01d813697bcbabc77e81bde2b4d0e08bf5325fb629263ca1be0ef5460cb60b83"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.381665 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" event={"ID":"993f5873-6998-453d-85a0-28e87d22380a","Type":"ContainerStarted","Data":"8ee6cb8ceee1c93d184d49806397598ab7a22b144fae4ce9dad7b37efbfa4dfa"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.384956 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" 
event={"ID":"d4966512-f477-492e-aec2-7a0131c9ae11","Type":"ContainerStarted","Data":"1d6669fe8b40607ff389a3f789711f88ab6af84150e3b329a33fb35070a446f5"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.391165 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" event={"ID":"31582323-a6b6-4ef3-8d00-01fb3a3d28f2","Type":"ContainerStarted","Data":"e704f315d0789992b1a2fb1130e6cd397a04786c988bb0d6105a63ab8d020259"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.394333 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" event={"ID":"c99af8a4-02e4-405f-bc7c-d4647a6db0d6","Type":"ContainerStarted","Data":"e9461015197820e86026f42f028fc50832fe8cc804b73231dc0d40a8b6363856"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.400317 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" event={"ID":"c77afc81-c86a-48e4-acae-861622a56981","Type":"ContainerStarted","Data":"25e0fb1ce9515d7a3e66032894ea962129bb752d2b1ff43fb356d7eed2e7a9b6"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.414845 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" event={"ID":"4fbd68da-2a71-4a5d-a2aa-d44a108c7323","Type":"ContainerStarted","Data":"2f4ca435e5dcff1fdcc9891d1c3b70fe32a3a2bd72cc5806084c05c855bfd03d"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.416777 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" event={"ID":"33a7c344-b7ab-48cb-bd7f-88e991a56ee3","Type":"ContainerStarted","Data":"3855425b9615bd5912e09a8057251dcc3b1cbd1387b878a9a1b9c2e9f4f0506e"} Dec 01 03:10:51 crc kubenswrapper[4880]: I1201 03:10:51.421113 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" event={"ID":"9a675dd8-d96e-4148-aa62-ae93aec9cb85","Type":"ContainerStarted","Data":"5adb7a6e483e675be67537e3627e9dc70d34c8569e92d27370acfc82c4ca6a35"} Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.218538 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xzqzh"] Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.221108 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.243691 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xzqzh"] Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.269472 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-catalog-content\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.269594 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-utilities\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.269625 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsz86\" (UniqueName: \"kubernetes.io/projected/cc858bde-6dc3-4749-9957-82bdc6768e3b-kube-api-access-hsz86\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " 
pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.371229 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-catalog-content\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.371322 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-utilities\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.371355 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsz86\" (UniqueName: \"kubernetes.io/projected/cc858bde-6dc3-4749-9957-82bdc6768e3b-kube-api-access-hsz86\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.371934 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-catalog-content\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.371944 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-utilities\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " 
pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.399679 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsz86\" (UniqueName: \"kubernetes.io/projected/cc858bde-6dc3-4749-9957-82bdc6768e3b-kube-api-access-hsz86\") pod \"certified-operators-xzqzh\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:53 crc kubenswrapper[4880]: I1201 03:10:53.539491 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.454628 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" event={"ID":"22dffe57-f99a-48e7-840c-e7f29f399e50","Type":"ContainerStarted","Data":"15fa7c2cce0ac5edb7399869b6d3e9abc1511dd62f0649b32d6137332b886ab1"} Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.798007 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7svp6"] Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.799376 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.823365 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7svp6"] Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.913284 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-utilities\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.913345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-catalog-content\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:55 crc kubenswrapper[4880]: I1201 03:10:55.913364 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbjk\" (UniqueName: \"kubernetes.io/projected/38306228-98ad-4fd2-ad14-8658436aed42-kube-api-access-dcbjk\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.014605 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-utilities\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.014683 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-catalog-content\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.014710 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcbjk\" (UniqueName: \"kubernetes.io/projected/38306228-98ad-4fd2-ad14-8658436aed42-kube-api-access-dcbjk\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.015645 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-utilities\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.015924 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-catalog-content\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.052207 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcbjk\" (UniqueName: \"kubernetes.io/projected/38306228-98ad-4fd2-ad14-8658436aed42-kube-api-access-dcbjk\") pod \"redhat-marketplace-7svp6\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:56 crc kubenswrapper[4880]: I1201 03:10:56.125891 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.618059 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ql9t8"] Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.620265 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.643616 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-utilities\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.644228 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-catalog-content\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.644563 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9857\" (UniqueName: \"kubernetes.io/projected/5aa7c9e0-cb6b-450b-98cc-c641828b5608-kube-api-access-h9857\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.644923 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ql9t8"] Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.755659 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-h9857\" (UniqueName: \"kubernetes.io/projected/5aa7c9e0-cb6b-450b-98cc-c641828b5608-kube-api-access-h9857\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.755756 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-utilities\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.755786 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-catalog-content\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.756423 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-catalog-content\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.756432 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-utilities\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.775162 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9857\" (UniqueName: 
\"kubernetes.io/projected/5aa7c9e0-cb6b-450b-98cc-c641828b5608-kube-api-access-h9857\") pod \"community-operators-ql9t8\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:57 crc kubenswrapper[4880]: I1201 03:10:57.973856 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:10:58 crc kubenswrapper[4880]: I1201 03:10:58.107262 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xzqzh"] Dec 01 03:10:58 crc kubenswrapper[4880]: I1201 03:10:58.483482 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" event={"ID":"f29c2d86-0f01-49d3-a040-74b70269010d","Type":"ContainerStarted","Data":"cbe417a7836bd5dea9f0fdce2375cc8d566c6e25fd9dda76ca7b9ebe32ea1569"} Dec 01 03:10:58 crc kubenswrapper[4880]: I1201 03:10:58.483827 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:10:58 crc kubenswrapper[4880]: I1201 03:10:58.485960 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" event={"ID":"ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c","Type":"ContainerStarted","Data":"11d04d84284f7ed75dd2dd0c69dfcb3d25ebbc487ca34539b699c773ba580eea"} Dec 01 03:10:58 crc kubenswrapper[4880]: I1201 03:10:58.528959 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" podStartSLOduration=42.528943295 podStartE2EDuration="42.528943295s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:10:58.518851253 
+0000 UTC m=+888.030105635" watchObservedRunningTime="2025-12-01 03:10:58.528943295 +0000 UTC m=+888.040197667" Dec 01 03:10:59 crc kubenswrapper[4880]: I1201 03:10:59.476416 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7svp6"] Dec 01 03:10:59 crc kubenswrapper[4880]: I1201 03:10:59.499792 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" event={"ID":"c69469ac-293e-422a-9399-d61c09fe4774","Type":"ContainerStarted","Data":"17efcdbd3ffe5f408e02ff2c4d66eb67e5e786ce0cb3dba25829cc410478c59b"} Dec 01 03:10:59 crc kubenswrapper[4880]: I1201 03:10:59.510557 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzqzh" event={"ID":"cc858bde-6dc3-4749-9957-82bdc6768e3b","Type":"ContainerStarted","Data":"6a41737a34763156e03d4747e72f1631a5f7bf3efa8f5a589e4492f1c5a7d643"} Dec 01 03:10:59 crc kubenswrapper[4880]: I1201 03:10:59.513850 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" event={"ID":"b627ae1c-571f-4009-9c66-65e2d05777f4","Type":"ContainerStarted","Data":"10417aeb680990c5d8ac9cfdc87f00f09a8da43d5aa14a20dc6c3f9461232dbd"} Dec 01 03:10:59 crc kubenswrapper[4880]: I1201 03:10:59.533790 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ffctz" podStartSLOduration=10.515350275 podStartE2EDuration="42.533773774s" podCreationTimestamp="2025-12-01 03:10:17 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.542657911 +0000 UTC m=+848.053912283" lastFinishedPulling="2025-12-01 03:10:50.56108141 +0000 UTC m=+880.072335782" observedRunningTime="2025-12-01 03:10:59.532606784 +0000 UTC m=+889.043861156" watchObservedRunningTime="2025-12-01 03:10:59.533773774 +0000 UTC m=+889.045028146" Dec 01 03:10:59 crc kubenswrapper[4880]: 
I1201 03:10:59.739303 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ql9t8"] Dec 01 03:10:59 crc kubenswrapper[4880]: W1201 03:10:59.774715 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aa7c9e0_cb6b_450b_98cc_c641828b5608.slice/crio-489e3513f8c46cbe668e24567680e4c61aadac4bec0ea99342b584f92e5efa33 WatchSource:0}: Error finding container 489e3513f8c46cbe668e24567680e4c61aadac4bec0ea99342b584f92e5efa33: Status 404 returned error can't find the container with id 489e3513f8c46cbe668e24567680e4c61aadac4bec0ea99342b584f92e5efa33 Dec 01 03:11:00 crc kubenswrapper[4880]: E1201 03:11:00.282736 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" podUID="10e31949-6791-42f4-ae62-c24c46fef261" Dec 01 03:11:00 crc kubenswrapper[4880]: E1201 03:11:00.323406 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" podUID="ea530c4b-8fc7-407e-bfe5-d8e4957360ea" Dec 01 03:11:00 crc kubenswrapper[4880]: E1201 03:11:00.407342 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" podUID="7aafd3a8-2df9-4639-b657-a39a2e915e78" Dec 01 03:11:00 crc kubenswrapper[4880]: E1201 03:11:00.470406 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" podUID="3c1771bd-575e-4ccd-9649-aa9cf2ecda71" Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.544441 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" event={"ID":"3c1771bd-575e-4ccd-9649-aa9cf2ecda71","Type":"ContainerStarted","Data":"297633cbbc15f748d2d47e566eaaa53a8ef398a79c3f45abbe712a6aab06ac7b"} Dec 01 03:11:00 crc kubenswrapper[4880]: E1201 03:11:00.547191 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" podUID="3c1771bd-575e-4ccd-9649-aa9cf2ecda71" Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.574101 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" event={"ID":"a7e64f8e-2446-4209-9f33-94696bf7d9ae","Type":"ContainerStarted","Data":"429d6a88e1ac05fe4431902e7e4e442c9973c5473e781a1995cb3a0e0f853b6b"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.590133 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" event={"ID":"ea530c4b-8fc7-407e-bfe5-d8e4957360ea","Type":"ContainerStarted","Data":"56e5553b78f4c3d520f3ffbe2f244dc6bfaa31b964bf9af250df23ffd1c48e7b"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.611591 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" 
event={"ID":"4fbd68da-2a71-4a5d-a2aa-d44a108c7323","Type":"ContainerStarted","Data":"ac3c799f1cc5cfe39cc9b87ef44b0ac70aa73b38504538d3f2527f8783fc01b8"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.614481 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.616805 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.643605 4880 generic.go:334] "Generic (PLEG): container finished" podID="38306228-98ad-4fd2-ad14-8658436aed42" containerID="237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6" exitCode=0 Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.644349 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7svp6" event={"ID":"38306228-98ad-4fd2-ad14-8658436aed42","Type":"ContainerDied","Data":"237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.644658 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7svp6" event={"ID":"38306228-98ad-4fd2-ad14-8658436aed42","Type":"ContainerStarted","Data":"b0b1e51da4aab20670de3edc2eada14059eea6655edeb9ec3bf0637505dba393"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.655438 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" event={"ID":"10e31949-6791-42f4-ae62-c24c46fef261","Type":"ContainerStarted","Data":"aad7fbda2f57afcd48d6fd9758efe8f63006d8eb601fedd991c527b4864ca61b"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.665038 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" event={"ID":"7aafd3a8-2df9-4639-b657-a39a2e915e78","Type":"ContainerStarted","Data":"997bcc16893aaaf8b1a8dfb8e7bc5e0ecd36560afa6ef3e335d923adba57138f"} Dec 01 03:11:00 crc kubenswrapper[4880]: E1201 03:11:00.666457 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" podUID="7aafd3a8-2df9-4639-b657-a39a2e915e78" Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.666538 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fnq86" podStartSLOduration=3.553693284 podStartE2EDuration="44.666527807s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.359493066 +0000 UTC m=+847.870747438" lastFinishedPulling="2025-12-01 03:10:59.472327589 +0000 UTC m=+888.983581961" observedRunningTime="2025-12-01 03:11:00.641584494 +0000 UTC m=+890.152838876" watchObservedRunningTime="2025-12-01 03:11:00.666527807 +0000 UTC m=+890.177782179" Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.682283 4880 generic.go:334] "Generic (PLEG): container finished" podID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerID="0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8" exitCode=0 Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.682334 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzqzh" event={"ID":"cc858bde-6dc3-4749-9957-82bdc6768e3b","Type":"ContainerDied","Data":"0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.691887 
4880 generic.go:334] "Generic (PLEG): container finished" podID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerID="375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119" exitCode=0 Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.692082 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql9t8" event={"ID":"5aa7c9e0-cb6b-450b-98cc-c641828b5608","Type":"ContainerDied","Data":"375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.692176 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql9t8" event={"ID":"5aa7c9e0-cb6b-450b-98cc-c641828b5608","Type":"ContainerStarted","Data":"489e3513f8c46cbe668e24567680e4c61aadac4bec0ea99342b584f92e5efa33"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.696760 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" event={"ID":"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a","Type":"ContainerStarted","Data":"359a435fc72787891f841962ede460a3066d86cff3f9856eaa3dfeb847613df0"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.698363 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" event={"ID":"2f40c1c9-64e3-4473-922e-02cc4d62a6af","Type":"ContainerStarted","Data":"32652fee4c573425b45ff9377b03378ba19059f9092699c4895953f172872ad6"} Dec 01 03:11:00 crc kubenswrapper[4880]: I1201 03:11:00.939707 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" podStartSLOduration=2.802448262 podStartE2EDuration="44.93969009s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:17.677579995 +0000 UTC m=+847.188834367" lastFinishedPulling="2025-12-01 03:10:59.814821823 
+0000 UTC m=+889.326076195" observedRunningTime="2025-12-01 03:11:00.935784532 +0000 UTC m=+890.447038914" watchObservedRunningTime="2025-12-01 03:11:00.93969009 +0000 UTC m=+890.450944462" Dec 01 03:11:01 crc kubenswrapper[4880]: E1201 03:11:01.144069 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" podUID="c2db9222-3278-42fc-860c-bcdadff99aa3" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.720407 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" event={"ID":"33a7c344-b7ab-48cb-bd7f-88e991a56ee3","Type":"ContainerStarted","Data":"078c0a908dfc4bec2b50329af3f5dcf6af1df8d24bd52c0c606fc6d6f7563cdc"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.721568 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.725126 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" event={"ID":"d4966512-f477-492e-aec2-7a0131c9ae11","Type":"ContainerStarted","Data":"7f82371c7b6b2a581ba3873b297832391b97bee55a5365cc08939bfe82964e2a"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.725489 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.727264 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" 
event={"ID":"31582323-a6b6-4ef3-8d00-01fb3a3d28f2","Type":"ContainerStarted","Data":"fa319523b04d1066570aee82a794dd38cf163f02cd9f894ea27d8a1fd1baeb77"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.728506 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.729231 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.730477 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" event={"ID":"c2db9222-3278-42fc-860c-bcdadff99aa3","Type":"ContainerStarted","Data":"4060f8f21d29bfcd62566a2f9d5615dc7f032b64a70fc2b2df82a0c73b98ed7b"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.734082 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" event={"ID":"552497de-0304-488e-be9a-d9eec00f697f","Type":"ContainerStarted","Data":"26999b6a3d8606e95483396f7791cb3ee9efafdbcffba00b846259ac1cd5904e"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.734284 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.736901 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.741185 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" 
event={"ID":"9a675dd8-d96e-4148-aa62-ae93aec9cb85","Type":"ContainerStarted","Data":"855d9509e4d6c180021b5fad022d0b578894b3f3f62b8e0a872bd8186e6cb9e8"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.741369 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.742643 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.745684 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" event={"ID":"c99af8a4-02e4-405f-bc7c-d4647a6db0d6","Type":"ContainerStarted","Data":"7172029135d84fcfa82e4d9277fc93c9f44be6f093458ddad89e7ea96812ebac"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.745881 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.747220 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-26hsz" podStartSLOduration=4.222519308 podStartE2EDuration="45.747206578s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.354999414 +0000 UTC m=+847.866253786" lastFinishedPulling="2025-12-01 03:10:59.879686684 +0000 UTC m=+889.390941056" observedRunningTime="2025-12-01 03:11:01.744389828 +0000 UTC m=+891.255644200" watchObservedRunningTime="2025-12-01 03:11:01.747206578 +0000 UTC m=+891.258460950" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.747320 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" 
Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.747989 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" event={"ID":"22dffe57-f99a-48e7-840c-e7f29f399e50","Type":"ContainerStarted","Data":"09e3c7ef566856e3af146a085b683639fa246553996a6cab9d7de1dfac60bc0d"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.748262 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.753551 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.755278 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" event={"ID":"2b4d6e6e-d5f5-4142-9b08-f99c025bcf8a","Type":"ContainerStarted","Data":"ba474912130088d58843fe8a2f59c3dc4a41e28fc90a23c2af6652b3b22c0583"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.755828 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.762580 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.762978 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" event={"ID":"d600d81b-97bf-4127-87f6-46d0258b4eea","Type":"ContainerStarted","Data":"7a0c67886ea63e0eddd163bd29857a0a363dc7c698c85fe72918d26858f25b48"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.763519 4880 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.765334 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.772655 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.778107 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" event={"ID":"993f5873-6998-453d-85a0-28e87d22380a","Type":"ContainerStarted","Data":"c745e14b05d522d17fdab5b9410f418972024c59c0ec84f5d188e61ecf7a8df9"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.778962 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.779029 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-8hc88" podStartSLOduration=3.677065236 podStartE2EDuration="45.779018183s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:17.690130308 +0000 UTC m=+847.201384680" lastFinishedPulling="2025-12-01 03:10:59.792083255 +0000 UTC m=+889.303337627" observedRunningTime="2025-12-01 03:11:01.778169462 +0000 UTC m=+891.289423844" watchObservedRunningTime="2025-12-01 03:11:01.779018183 +0000 UTC m=+891.290272555" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.782571 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" Dec 01 03:11:01 crc 
kubenswrapper[4880]: I1201 03:11:01.789997 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" event={"ID":"c69469ac-293e-422a-9399-d61c09fe4774","Type":"ContainerStarted","Data":"e376215596b4a461d6d3c82270fdc4133867ccc4911f0a2fe04e1f2c9951ea05"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.790336 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.797532 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" event={"ID":"ea1e7b2c-ef0b-4206-b39a-8062a24a3f3c","Type":"ContainerStarted","Data":"09a5dcf75a65cad0aa55095a1234f404074ff94c9ffba53ca1e870fe93e42880"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.798246 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.835432 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-lbhh8" podStartSLOduration=4.122208284 podStartE2EDuration="45.835417252s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.146894976 +0000 UTC m=+847.658149348" lastFinishedPulling="2025-12-01 03:10:59.860103944 +0000 UTC m=+889.371358316" observedRunningTime="2025-12-01 03:11:01.834184191 +0000 UTC m=+891.345438563" watchObservedRunningTime="2025-12-01 03:11:01.835417252 +0000 UTC m=+891.346671624" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.846248 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" 
event={"ID":"a7e64f8e-2446-4209-9f33-94696bf7d9ae","Type":"ContainerStarted","Data":"9632b01cce736a21c1369b4fe5bcb03a1a55c6bde611158e78cea0274075559c"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.846964 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.874278 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" event={"ID":"c77afc81-c86a-48e4-acae-861622a56981","Type":"ContainerStarted","Data":"2fbfe020b808cb6da43b88e60d73b246c6b00caf195a11d613180134416f5b92"} Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.874324 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.874962 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.875173 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-d5vnm" podStartSLOduration=4.003831927 podStartE2EDuration="45.875154834s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:17.956784148 +0000 UTC m=+847.468038520" lastFinishedPulling="2025-12-01 03:10:59.828107055 +0000 UTC m=+889.339361427" observedRunningTime="2025-12-01 03:11:01.874149019 +0000 UTC m=+891.385403391" watchObservedRunningTime="2025-12-01 03:11:01.875154834 +0000 UTC m=+891.386409206" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.881016 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-wxpdg" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.882697 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" Dec 01 03:11:01 crc kubenswrapper[4880]: I1201 03:11:01.989279 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6qzxb" podStartSLOduration=3.748535321 podStartE2EDuration="45.989247654s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:17.602418387 +0000 UTC m=+847.113672759" lastFinishedPulling="2025-12-01 03:10:59.84313072 +0000 UTC m=+889.354385092" observedRunningTime="2025-12-01 03:11:01.989158222 +0000 UTC m=+891.500412594" watchObservedRunningTime="2025-12-01 03:11:01.989247654 +0000 UTC m=+891.500502026" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.033050 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" podStartSLOduration=4.516281696 podStartE2EDuration="46.033030637s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.361955558 +0000 UTC m=+847.873209930" lastFinishedPulling="2025-12-01 03:10:59.878704509 +0000 UTC m=+889.389958871" observedRunningTime="2025-12-01 03:11:02.020384272 +0000 UTC m=+891.531638644" watchObservedRunningTime="2025-12-01 03:11:02.033030637 +0000 UTC m=+891.544285009" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.050820 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-2pmz8" podStartSLOduration=3.935573442 podStartE2EDuration="46.050799381s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:17.703587654 +0000 UTC 
m=+847.214842026" lastFinishedPulling="2025-12-01 03:10:59.818813603 +0000 UTC m=+889.330067965" observedRunningTime="2025-12-01 03:11:02.043612352 +0000 UTC m=+891.554866724" watchObservedRunningTime="2025-12-01 03:11:02.050799381 +0000 UTC m=+891.562053753" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.108409 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" podStartSLOduration=36.872198037 podStartE2EDuration="46.10839023s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:49.939092856 +0000 UTC m=+879.450347258" lastFinishedPulling="2025-12-01 03:10:59.175285089 +0000 UTC m=+888.686539451" observedRunningTime="2025-12-01 03:11:02.076171665 +0000 UTC m=+891.587426037" watchObservedRunningTime="2025-12-01 03:11:02.10839023 +0000 UTC m=+891.619644592" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.110780 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" podStartSLOduration=14.678768406 podStartE2EDuration="46.110775649s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.535748108 +0000 UTC m=+848.047002470" lastFinishedPulling="2025-12-01 03:10:49.967755331 +0000 UTC m=+879.479009713" observedRunningTime="2025-12-01 03:11:02.10918063 +0000 UTC m=+891.620435002" watchObservedRunningTime="2025-12-01 03:11:02.110775649 +0000 UTC m=+891.622030021" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.134811 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rdncc" podStartSLOduration=4.733278036 podStartE2EDuration="46.134795739s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.525438281 +0000 UTC m=+848.036692653" 
lastFinishedPulling="2025-12-01 03:10:59.926955984 +0000 UTC m=+889.438210356" observedRunningTime="2025-12-01 03:11:02.134185944 +0000 UTC m=+891.645440316" watchObservedRunningTime="2025-12-01 03:11:02.134795739 +0000 UTC m=+891.646050111" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.228188 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-vb84m" podStartSLOduration=4.886406311 podStartE2EDuration="46.228161881s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.385580918 +0000 UTC m=+847.896835290" lastFinishedPulling="2025-12-01 03:10:59.727336488 +0000 UTC m=+889.238590860" observedRunningTime="2025-12-01 03:11:02.196845599 +0000 UTC m=+891.708099971" watchObservedRunningTime="2025-12-01 03:11:02.228161881 +0000 UTC m=+891.739416253" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.244202 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-rpjrm" podStartSLOduration=4.316169649 podStartE2EDuration="46.244183842s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:17.950732807 +0000 UTC m=+847.461987179" lastFinishedPulling="2025-12-01 03:10:59.87874701 +0000 UTC m=+889.390001372" observedRunningTime="2025-12-01 03:11:02.240816087 +0000 UTC m=+891.752070459" watchObservedRunningTime="2025-12-01 03:11:02.244183842 +0000 UTC m=+891.755438204" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.275349 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" podStartSLOduration=37.02247464 podStartE2EDuration="46.27533358s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:49.939064205 +0000 UTC m=+879.450318587" 
lastFinishedPulling="2025-12-01 03:10:59.191923135 +0000 UTC m=+888.703177527" observedRunningTime="2025-12-01 03:11:02.27296722 +0000 UTC m=+891.784221582" watchObservedRunningTime="2025-12-01 03:11:02.27533358 +0000 UTC m=+891.786587952" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.321600 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-l7pfx" podStartSLOduration=4.688789155 podStartE2EDuration="46.321580895s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.185987122 +0000 UTC m=+847.697241494" lastFinishedPulling="2025-12-01 03:10:59.818778862 +0000 UTC m=+889.330033234" observedRunningTime="2025-12-01 03:11:02.320007585 +0000 UTC m=+891.831261957" watchObservedRunningTime="2025-12-01 03:11:02.321580895 +0000 UTC m=+891.832835267" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.880604 4880 generic.go:334] "Generic (PLEG): container finished" podID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerID="7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3" exitCode=0 Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.880659 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql9t8" event={"ID":"5aa7c9e0-cb6b-450b-98cc-c641828b5608","Type":"ContainerDied","Data":"7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3"} Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.882706 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" event={"ID":"ea530c4b-8fc7-407e-bfe5-d8e4957360ea","Type":"ContainerStarted","Data":"c91450b4a98e1684e2466758452cfb06737124703be00b77eae2b31e18607990"} Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.882830 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.884438 4880 generic.go:334] "Generic (PLEG): container finished" podID="38306228-98ad-4fd2-ad14-8658436aed42" containerID="c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763" exitCode=0 Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.884461 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7svp6" event={"ID":"38306228-98ad-4fd2-ad14-8658436aed42","Type":"ContainerDied","Data":"c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763"} Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.885944 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" event={"ID":"10e31949-6791-42f4-ae62-c24c46fef261","Type":"ContainerStarted","Data":"b91041254726e51297172eef948bb20478f2582504fb0c59026a562279893b4c"} Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.886048 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.887443 4880 generic.go:334] "Generic (PLEG): container finished" podID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerID="dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce" exitCode=0 Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.887478 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzqzh" event={"ID":"cc858bde-6dc3-4749-9957-82bdc6768e3b","Type":"ContainerDied","Data":"dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce"} Dec 01 03:11:02 crc kubenswrapper[4880]: I1201 03:11:02.889714 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vvt67" Dec 01 03:11:03 crc kubenswrapper[4880]: I1201 03:11:03.046378 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" podStartSLOduration=3.798850867 podStartE2EDuration="47.046363428s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.147192153 +0000 UTC m=+847.658446525" lastFinishedPulling="2025-12-01 03:11:01.394704714 +0000 UTC m=+890.905959086" observedRunningTime="2025-12-01 03:11:03.044106932 +0000 UTC m=+892.555361304" watchObservedRunningTime="2025-12-01 03:11:03.046363428 +0000 UTC m=+892.557617800" Dec 01 03:11:03 crc kubenswrapper[4880]: I1201 03:11:03.071175 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" podStartSLOduration=4.112916552 podStartE2EDuration="47.071159367s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.358652565 +0000 UTC m=+847.869906937" lastFinishedPulling="2025-12-01 03:11:01.31689538 +0000 UTC m=+890.828149752" observedRunningTime="2025-12-01 03:11:03.066139352 +0000 UTC m=+892.577393724" watchObservedRunningTime="2025-12-01 03:11:03.071159367 +0000 UTC m=+892.582413729" Dec 01 03:11:03 crc kubenswrapper[4880]: I1201 03:11:03.482305 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-644d6ccc8b-dp9d2" Dec 01 03:11:03 crc kubenswrapper[4880]: I1201 03:11:03.899230 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" event={"ID":"c2db9222-3278-42fc-860c-bcdadff99aa3","Type":"ContainerStarted","Data":"2ef4fd4df11e30721a51aa02404f408c081e5c875d47e855a66c0f145f81016e"} Dec 01 03:11:03 crc kubenswrapper[4880]: 
I1201 03:11:03.901220 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 03:11:04 crc kubenswrapper[4880]: I1201 03:11:04.907493 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzqzh" event={"ID":"cc858bde-6dc3-4749-9957-82bdc6768e3b","Type":"ContainerStarted","Data":"d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c"} Dec 01 03:11:04 crc kubenswrapper[4880]: I1201 03:11:04.914006 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql9t8" event={"ID":"5aa7c9e0-cb6b-450b-98cc-c641828b5608","Type":"ContainerStarted","Data":"022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0"} Dec 01 03:11:04 crc kubenswrapper[4880]: I1201 03:11:04.916537 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7svp6" event={"ID":"38306228-98ad-4fd2-ad14-8658436aed42","Type":"ContainerStarted","Data":"f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b"} Dec 01 03:11:04 crc kubenswrapper[4880]: I1201 03:11:04.936815 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" podStartSLOduration=4.270344784 podStartE2EDuration="48.936788015s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.358077381 +0000 UTC m=+847.869331753" lastFinishedPulling="2025-12-01 03:11:03.024520612 +0000 UTC m=+892.535774984" observedRunningTime="2025-12-01 03:11:03.921282051 +0000 UTC m=+893.432536423" watchObservedRunningTime="2025-12-01 03:11:04.936788015 +0000 UTC m=+894.448042397" Dec 01 03:11:04 crc kubenswrapper[4880]: I1201 03:11:04.941740 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xzqzh" 
podStartSLOduration=9.123467477 podStartE2EDuration="11.941729259s" podCreationTimestamp="2025-12-01 03:10:53 +0000 UTC" firstStartedPulling="2025-12-01 03:11:00.686184938 +0000 UTC m=+890.197439310" lastFinishedPulling="2025-12-01 03:11:03.50444672 +0000 UTC m=+893.015701092" observedRunningTime="2025-12-01 03:11:04.931914313 +0000 UTC m=+894.443168695" watchObservedRunningTime="2025-12-01 03:11:04.941729259 +0000 UTC m=+894.452983641" Dec 01 03:11:04 crc kubenswrapper[4880]: I1201 03:11:04.983156 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ql9t8" podStartSLOduration=4.597305365 podStartE2EDuration="7.983138053s" podCreationTimestamp="2025-12-01 03:10:57 +0000 UTC" firstStartedPulling="2025-12-01 03:11:00.694639789 +0000 UTC m=+890.205894161" lastFinishedPulling="2025-12-01 03:11:04.080472477 +0000 UTC m=+893.591726849" observedRunningTime="2025-12-01 03:11:04.961433611 +0000 UTC m=+894.472687983" watchObservedRunningTime="2025-12-01 03:11:04.983138053 +0000 UTC m=+894.494392425" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.126898 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.127219 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.172157 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.192605 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7svp6" podStartSLOduration=8.342180548 podStartE2EDuration="11.192581742s" podCreationTimestamp="2025-12-01 03:10:55 +0000 UTC" firstStartedPulling="2025-12-01 03:11:00.64942776 
+0000 UTC m=+890.160682132" lastFinishedPulling="2025-12-01 03:11:03.499828954 +0000 UTC m=+893.011083326" observedRunningTime="2025-12-01 03:11:04.984048836 +0000 UTC m=+894.495303218" watchObservedRunningTime="2025-12-01 03:11:06.192581742 +0000 UTC m=+895.703836134" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.201757 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6r4c"] Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.203578 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.223107 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6r4c"] Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.270942 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzpj\" (UniqueName: \"kubernetes.io/projected/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-kube-api-access-2gzpj\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.271024 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-catalog-content\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.271048 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-utilities\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " 
pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.379453 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-catalog-content\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.379530 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-utilities\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.379687 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzpj\" (UniqueName: \"kubernetes.io/projected/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-kube-api-access-2gzpj\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.380650 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-catalog-content\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.380926 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-utilities\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc 
kubenswrapper[4880]: I1201 03:11:06.399499 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzpj\" (UniqueName: \"kubernetes.io/projected/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-kube-api-access-2gzpj\") pod \"redhat-operators-h6r4c\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.517857 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.781609 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-dllfg" Dec 01 03:11:06 crc kubenswrapper[4880]: I1201 03:11:06.800054 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-sr86h" Dec 01 03:11:07 crc kubenswrapper[4880]: I1201 03:11:07.049398 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6r4c"] Dec 01 03:11:07 crc kubenswrapper[4880]: I1201 03:11:07.526069 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-29k9v" Dec 01 03:11:07 crc kubenswrapper[4880]: I1201 03:11:07.940469 4880 generic.go:334] "Generic (PLEG): container finished" podID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerID="f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d" exitCode=0 Dec 01 03:11:07 crc kubenswrapper[4880]: I1201 03:11:07.940567 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerDied","Data":"f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d"} Dec 01 03:11:07 crc kubenswrapper[4880]: 
I1201 03:11:07.940619 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerStarted","Data":"dfbd630097e06178b0a7f328783d5c6f014483294a711cc44315003f7d93c680"} Dec 01 03:11:07 crc kubenswrapper[4880]: I1201 03:11:07.974786 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:11:07 crc kubenswrapper[4880]: I1201 03:11:07.975188 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:11:08 crc kubenswrapper[4880]: I1201 03:11:08.049261 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:11:09 crc kubenswrapper[4880]: I1201 03:11:09.960611 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerStarted","Data":"ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3"} Dec 01 03:11:10 crc kubenswrapper[4880]: I1201 03:11:10.032550 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:11:10 crc kubenswrapper[4880]: I1201 03:11:10.969295 4880 generic.go:334] "Generic (PLEG): container finished" podID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerID="ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3" exitCode=0 Dec 01 03:11:10 crc kubenswrapper[4880]: I1201 03:11:10.970087 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerDied","Data":"ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3"} Dec 01 03:11:11 crc kubenswrapper[4880]: I1201 03:11:11.994086 
4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerStarted","Data":"dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2"} Dec 01 03:11:11 crc kubenswrapper[4880]: I1201 03:11:11.998360 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ql9t8"] Dec 01 03:11:11 crc kubenswrapper[4880]: I1201 03:11:11.998640 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ql9t8" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="registry-server" containerID="cri-o://022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0" gracePeriod=2 Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.019401 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6r4c" podStartSLOduration=2.570969287 podStartE2EDuration="6.019382318s" podCreationTimestamp="2025-12-01 03:11:06 +0000 UTC" firstStartedPulling="2025-12-01 03:11:07.942405238 +0000 UTC m=+897.453659610" lastFinishedPulling="2025-12-01 03:11:11.390818269 +0000 UTC m=+900.902072641" observedRunningTime="2025-12-01 03:11:12.018803294 +0000 UTC m=+901.530057676" watchObservedRunningTime="2025-12-01 03:11:12.019382318 +0000 UTC m=+901.530636700" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.386483 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.419973 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-utilities\") pod \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.420056 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9857\" (UniqueName: \"kubernetes.io/projected/5aa7c9e0-cb6b-450b-98cc-c641828b5608-kube-api-access-h9857\") pod \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.420103 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-catalog-content\") pod \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\" (UID: \"5aa7c9e0-cb6b-450b-98cc-c641828b5608\") " Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.421049 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-utilities" (OuterVolumeSpecName: "utilities") pod "5aa7c9e0-cb6b-450b-98cc-c641828b5608" (UID: "5aa7c9e0-cb6b-450b-98cc-c641828b5608"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.429038 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa7c9e0-cb6b-450b-98cc-c641828b5608-kube-api-access-h9857" (OuterVolumeSpecName: "kube-api-access-h9857") pod "5aa7c9e0-cb6b-450b-98cc-c641828b5608" (UID: "5aa7c9e0-cb6b-450b-98cc-c641828b5608"). InnerVolumeSpecName "kube-api-access-h9857". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.446967 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9857\" (UniqueName: \"kubernetes.io/projected/5aa7c9e0-cb6b-450b-98cc-c641828b5608-kube-api-access-h9857\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.446998 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.476354 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5aa7c9e0-cb6b-450b-98cc-c641828b5608" (UID: "5aa7c9e0-cb6b-450b-98cc-c641828b5608"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.548286 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c9e0-cb6b-450b-98cc-c641828b5608-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.575708 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-8z9hv" Dec 01 03:11:12 crc kubenswrapper[4880]: I1201 03:11:12.878711 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4xpnhl" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.003488 4880 generic.go:334] "Generic (PLEG): container finished" podID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerID="022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0" 
exitCode=0 Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.003566 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql9t8" event={"ID":"5aa7c9e0-cb6b-450b-98cc-c641828b5608","Type":"ContainerDied","Data":"022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0"} Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.003604 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql9t8" event={"ID":"5aa7c9e0-cb6b-450b-98cc-c641828b5608","Type":"ContainerDied","Data":"489e3513f8c46cbe668e24567680e4c61aadac4bec0ea99342b584f92e5efa33"} Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.003620 4880 scope.go:117] "RemoveContainer" containerID="022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.004730 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql9t8" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.024581 4880 scope.go:117] "RemoveContainer" containerID="7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.036789 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ql9t8"] Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.041638 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ql9t8"] Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.047157 4880 scope.go:117] "RemoveContainer" containerID="375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.077479 4880 scope.go:117] "RemoveContainer" containerID="022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0" Dec 01 03:11:13 crc kubenswrapper[4880]: E1201 03:11:13.078243 4880 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0\": container with ID starting with 022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0 not found: ID does not exist" containerID="022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.078269 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0"} err="failed to get container status \"022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0\": rpc error: code = NotFound desc = could not find container \"022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0\": container with ID starting with 022d29c41955115097b86ca940c920ef70bba6e4402691ae55170788f7c88fc0 not found: ID does not exist" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.078289 4880 scope.go:117] "RemoveContainer" containerID="7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3" Dec 01 03:11:13 crc kubenswrapper[4880]: E1201 03:11:13.078586 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3\": container with ID starting with 7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3 not found: ID does not exist" containerID="7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.078600 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3"} err="failed to get container status \"7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3\": rpc error: code = NotFound desc = could 
not find container \"7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3\": container with ID starting with 7638b79a93ae60857fc4173ae9e8fd5d3d74d3eb77f6981b543fb8fbe48dcfd3 not found: ID does not exist" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.078611 4880 scope.go:117] "RemoveContainer" containerID="375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119" Dec 01 03:11:13 crc kubenswrapper[4880]: E1201 03:11:13.078825 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119\": container with ID starting with 375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119 not found: ID does not exist" containerID="375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.078849 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119"} err="failed to get container status \"375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119\": rpc error: code = NotFound desc = could not find container \"375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119\": container with ID starting with 375e6e844059a0f54ab9c31737cd17f3116089499f8cbac2f46a7a583ebac119 not found: ID does not exist" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.540216 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.540503 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:11:13 crc kubenswrapper[4880]: I1201 03:11:13.588464 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.010859 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" event={"ID":"7aafd3a8-2df9-4639-b657-a39a2e915e78","Type":"ContainerStarted","Data":"8cac94420c62904cbe6e32a7e7f8d9b8003c3e1eb9f540defc34c04b0c7af485"} Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.011456 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.013374 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" event={"ID":"3c1771bd-575e-4ccd-9649-aa9cf2ecda71","Type":"ContainerStarted","Data":"05d84aad7688b6fd50ce33a57f4e385f214401d38b9962b766ffe67d54f6f8db"} Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.013600 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.052561 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" podStartSLOduration=3.190936813 podStartE2EDuration="58.052545021s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.365336982 +0000 UTC m=+847.876591354" lastFinishedPulling="2025-12-01 03:11:13.22694517 +0000 UTC m=+902.738199562" observedRunningTime="2025-12-01 03:11:14.049053124 +0000 UTC m=+903.560307506" watchObservedRunningTime="2025-12-01 03:11:14.052545021 +0000 UTC m=+903.563799393" Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.067439 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" podStartSLOduration=3.006814155 podStartE2EDuration="58.067420113s" podCreationTimestamp="2025-12-01 03:10:16 +0000 UTC" firstStartedPulling="2025-12-01 03:10:18.537842131 +0000 UTC m=+848.049096503" lastFinishedPulling="2025-12-01 03:11:13.598448089 +0000 UTC m=+903.109702461" observedRunningTime="2025-12-01 03:11:14.062210963 +0000 UTC m=+903.573465325" watchObservedRunningTime="2025-12-01 03:11:14.067420113 +0000 UTC m=+903.578674485" Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.101729 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:11:14 crc kubenswrapper[4880]: I1201 03:11:14.793741 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" path="/var/lib/kubelet/pods/5aa7c9e0-cb6b-450b-98cc-c641828b5608/volumes" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.198077 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.390041 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xzqzh"] Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.390279 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xzqzh" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="registry-server" containerID="cri-o://d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c" gracePeriod=2 Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.519190 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.519224 4880 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.750249 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.798494 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsz86\" (UniqueName: \"kubernetes.io/projected/cc858bde-6dc3-4749-9957-82bdc6768e3b-kube-api-access-hsz86\") pod \"cc858bde-6dc3-4749-9957-82bdc6768e3b\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.798557 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-utilities\") pod \"cc858bde-6dc3-4749-9957-82bdc6768e3b\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.798712 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-catalog-content\") pod \"cc858bde-6dc3-4749-9957-82bdc6768e3b\" (UID: \"cc858bde-6dc3-4749-9957-82bdc6768e3b\") " Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.799774 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-utilities" (OuterVolumeSpecName: "utilities") pod "cc858bde-6dc3-4749-9957-82bdc6768e3b" (UID: "cc858bde-6dc3-4749-9957-82bdc6768e3b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.805905 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc858bde-6dc3-4749-9957-82bdc6768e3b-kube-api-access-hsz86" (OuterVolumeSpecName: "kube-api-access-hsz86") pod "cc858bde-6dc3-4749-9957-82bdc6768e3b" (UID: "cc858bde-6dc3-4749-9957-82bdc6768e3b"). InnerVolumeSpecName "kube-api-access-hsz86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.855017 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc858bde-6dc3-4749-9957-82bdc6768e3b" (UID: "cc858bde-6dc3-4749-9957-82bdc6768e3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.900509 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.900536 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc858bde-6dc3-4749-9957-82bdc6768e3b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.900548 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsz86\" (UniqueName: \"kubernetes.io/projected/cc858bde-6dc3-4749-9957-82bdc6768e3b-kube-api-access-hsz86\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:16 crc kubenswrapper[4880]: I1201 03:11:16.922567 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-94tz5" Dec 01 
03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.039067 4880 generic.go:334] "Generic (PLEG): container finished" podID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerID="d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c" exitCode=0 Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.039107 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzqzh" event={"ID":"cc858bde-6dc3-4749-9957-82bdc6768e3b","Type":"ContainerDied","Data":"d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c"} Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.039160 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzqzh" event={"ID":"cc858bde-6dc3-4749-9957-82bdc6768e3b","Type":"ContainerDied","Data":"6a41737a34763156e03d4747e72f1631a5f7bf3efa8f5a589e4492f1c5a7d643"} Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.039178 4880 scope.go:117] "RemoveContainer" containerID="d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.039176 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xzqzh" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.055564 4880 scope.go:117] "RemoveContainer" containerID="dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.072404 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xzqzh"] Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.076064 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xzqzh"] Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.085762 4880 scope.go:117] "RemoveContainer" containerID="0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.106961 4880 scope.go:117] "RemoveContainer" containerID="d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c" Dec 01 03:11:17 crc kubenswrapper[4880]: E1201 03:11:17.107340 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c\": container with ID starting with d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c not found: ID does not exist" containerID="d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.107391 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c"} err="failed to get container status \"d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c\": rpc error: code = NotFound desc = could not find container \"d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c\": container with ID starting with d3ad2dba9af791963a8674eff529b1e6918bd1f13b28b8d69f77befeb1dcbb4c not 
found: ID does not exist" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.107426 4880 scope.go:117] "RemoveContainer" containerID="dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce" Dec 01 03:11:17 crc kubenswrapper[4880]: E1201 03:11:17.108075 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce\": container with ID starting with dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce not found: ID does not exist" containerID="dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.108114 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce"} err="failed to get container status \"dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce\": rpc error: code = NotFound desc = could not find container \"dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce\": container with ID starting with dd9306c015521b887f195038c61409611d317d084da790ed1528541fd17a36ce not found: ID does not exist" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.108160 4880 scope.go:117] "RemoveContainer" containerID="0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8" Dec 01 03:11:17 crc kubenswrapper[4880]: E1201 03:11:17.108490 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8\": container with ID starting with 0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8 not found: ID does not exist" containerID="0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.108513 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8"} err="failed to get container status \"0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8\": rpc error: code = NotFound desc = could not find container \"0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8\": container with ID starting with 0838d51c4c144d5cad188121f5e2f26e4b1b881769450dd72451231e6b8bf4c8 not found: ID does not exist" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.369410 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.369477 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:11:17 crc kubenswrapper[4880]: I1201 03:11:17.570577 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6r4c" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="registry-server" probeResult="failure" output=< Dec 01 03:11:17 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 03:11:17 crc kubenswrapper[4880]: > Dec 01 03:11:18 crc kubenswrapper[4880]: I1201 03:11:18.594119 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7svp6"] Dec 01 03:11:18 crc kubenswrapper[4880]: I1201 03:11:18.594473 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-7svp6" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="registry-server" containerID="cri-o://f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b" gracePeriod=2 Dec 01 03:11:18 crc kubenswrapper[4880]: I1201 03:11:18.794756 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" path="/var/lib/kubelet/pods/cc858bde-6dc3-4749-9957-82bdc6768e3b/volumes" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.029981 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.063774 4880 generic.go:334] "Generic (PLEG): container finished" podID="38306228-98ad-4fd2-ad14-8658436aed42" containerID="f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b" exitCode=0 Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.063816 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7svp6" event={"ID":"38306228-98ad-4fd2-ad14-8658436aed42","Type":"ContainerDied","Data":"f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b"} Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.063846 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7svp6" event={"ID":"38306228-98ad-4fd2-ad14-8658436aed42","Type":"ContainerDied","Data":"b0b1e51da4aab20670de3edc2eada14059eea6655edeb9ec3bf0637505dba393"} Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.063895 4880 scope.go:117] "RemoveContainer" containerID="f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.064026 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7svp6" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.103355 4880 scope.go:117] "RemoveContainer" containerID="c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.118123 4880 scope.go:117] "RemoveContainer" containerID="237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.133576 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcbjk\" (UniqueName: \"kubernetes.io/projected/38306228-98ad-4fd2-ad14-8658436aed42-kube-api-access-dcbjk\") pod \"38306228-98ad-4fd2-ad14-8658436aed42\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.133659 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-catalog-content\") pod \"38306228-98ad-4fd2-ad14-8658436aed42\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.133746 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-utilities\") pod \"38306228-98ad-4fd2-ad14-8658436aed42\" (UID: \"38306228-98ad-4fd2-ad14-8658436aed42\") " Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.135385 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-utilities" (OuterVolumeSpecName: "utilities") pod "38306228-98ad-4fd2-ad14-8658436aed42" (UID: "38306228-98ad-4fd2-ad14-8658436aed42"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.139669 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38306228-98ad-4fd2-ad14-8658436aed42-kube-api-access-dcbjk" (OuterVolumeSpecName: "kube-api-access-dcbjk") pod "38306228-98ad-4fd2-ad14-8658436aed42" (UID: "38306228-98ad-4fd2-ad14-8658436aed42"). InnerVolumeSpecName "kube-api-access-dcbjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.154736 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38306228-98ad-4fd2-ad14-8658436aed42" (UID: "38306228-98ad-4fd2-ad14-8658436aed42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.163518 4880 scope.go:117] "RemoveContainer" containerID="f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b" Dec 01 03:11:19 crc kubenswrapper[4880]: E1201 03:11:19.163894 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b\": container with ID starting with f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b not found: ID does not exist" containerID="f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.163927 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b"} err="failed to get container status \"f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b\": rpc error: code = NotFound desc = could not find 
container \"f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b\": container with ID starting with f7fab96749358a4bfe54bc44b34520ecd3f43970f2a0472587066972c4a4954b not found: ID does not exist" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.163948 4880 scope.go:117] "RemoveContainer" containerID="c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763" Dec 01 03:11:19 crc kubenswrapper[4880]: E1201 03:11:19.164219 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763\": container with ID starting with c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763 not found: ID does not exist" containerID="c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.164260 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763"} err="failed to get container status \"c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763\": rpc error: code = NotFound desc = could not find container \"c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763\": container with ID starting with c4bf55f33195d8c6b1c9167e71cc005ee236692e84009bc5a968b45e8dc02763 not found: ID does not exist" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.164288 4880 scope.go:117] "RemoveContainer" containerID="237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6" Dec 01 03:11:19 crc kubenswrapper[4880]: E1201 03:11:19.164555 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6\": container with ID starting with 237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6 not found: ID does 
not exist" containerID="237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.164595 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6"} err="failed to get container status \"237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6\": rpc error: code = NotFound desc = could not find container \"237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6\": container with ID starting with 237903a8eb71a935b4cbc6aef0c946d5c614ad4cfc67f1d52664cafe6ef7e1f6 not found: ID does not exist" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.234421 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.234448 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcbjk\" (UniqueName: \"kubernetes.io/projected/38306228-98ad-4fd2-ad14-8658436aed42-kube-api-access-dcbjk\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.234457 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38306228-98ad-4fd2-ad14-8658436aed42-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.408982 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7svp6"] Dec 01 03:11:19 crc kubenswrapper[4880]: I1201 03:11:19.413509 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7svp6"] Dec 01 03:11:20 crc kubenswrapper[4880]: I1201 03:11:20.799620 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="38306228-98ad-4fd2-ad14-8658436aed42" path="/var/lib/kubelet/pods/38306228-98ad-4fd2-ad14-8658436aed42/volumes" Dec 01 03:11:26 crc kubenswrapper[4880]: I1201 03:11:26.595502 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:26 crc kubenswrapper[4880]: I1201 03:11:26.662178 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:27 crc kubenswrapper[4880]: I1201 03:11:27.248812 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tm8qf" Dec 01 03:11:27 crc kubenswrapper[4880]: I1201 03:11:27.398223 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6r4c"] Dec 01 03:11:27 crc kubenswrapper[4880]: I1201 03:11:27.511837 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-shn7t" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.138934 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6r4c" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="registry-server" containerID="cri-o://dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2" gracePeriod=2 Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.554482 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.576259 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-utilities\") pod \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.576298 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-catalog-content\") pod \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.576335 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gzpj\" (UniqueName: \"kubernetes.io/projected/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-kube-api-access-2gzpj\") pod \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\" (UID: \"9bd4ada0-9a69-427f-a971-ca5d10cf7eab\") " Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.577046 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-utilities" (OuterVolumeSpecName: "utilities") pod "9bd4ada0-9a69-427f-a971-ca5d10cf7eab" (UID: "9bd4ada0-9a69-427f-a971-ca5d10cf7eab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.581589 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-kube-api-access-2gzpj" (OuterVolumeSpecName: "kube-api-access-2gzpj") pod "9bd4ada0-9a69-427f-a971-ca5d10cf7eab" (UID: "9bd4ada0-9a69-427f-a971-ca5d10cf7eab"). InnerVolumeSpecName "kube-api-access-2gzpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.677990 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.678023 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gzpj\" (UniqueName: \"kubernetes.io/projected/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-kube-api-access-2gzpj\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.682261 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bd4ada0-9a69-427f-a971-ca5d10cf7eab" (UID: "9bd4ada0-9a69-427f-a971-ca5d10cf7eab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:11:28 crc kubenswrapper[4880]: I1201 03:11:28.778725 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bd4ada0-9a69-427f-a971-ca5d10cf7eab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.152165 4880 generic.go:334] "Generic (PLEG): container finished" podID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerID="dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2" exitCode=0 Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.152241 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerDied","Data":"dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2"} Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.152286 4880 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6r4c" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.152324 4880 scope.go:117] "RemoveContainer" containerID="dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.152303 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6r4c" event={"ID":"9bd4ada0-9a69-427f-a971-ca5d10cf7eab","Type":"ContainerDied","Data":"dfbd630097e06178b0a7f328783d5c6f014483294a711cc44315003f7d93c680"} Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.192061 4880 scope.go:117] "RemoveContainer" containerID="ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.192839 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6r4c"] Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.205426 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6r4c"] Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.229475 4880 scope.go:117] "RemoveContainer" containerID="f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.270785 4880 scope.go:117] "RemoveContainer" containerID="dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2" Dec 01 03:11:29 crc kubenswrapper[4880]: E1201 03:11:29.271301 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2\": container with ID starting with dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2 not found: ID does not exist" containerID="dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.271351 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2"} err="failed to get container status \"dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2\": rpc error: code = NotFound desc = could not find container \"dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2\": container with ID starting with dc115d72cbb83b0a2966f2d9803a8c485e572247ed74ff7ba3d0b07933bb76a2 not found: ID does not exist" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.271383 4880 scope.go:117] "RemoveContainer" containerID="ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3" Dec 01 03:11:29 crc kubenswrapper[4880]: E1201 03:11:29.271805 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3\": container with ID starting with ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3 not found: ID does not exist" containerID="ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.271845 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3"} err="failed to get container status \"ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3\": rpc error: code = NotFound desc = could not find container \"ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3\": container with ID starting with ff179ca81040f334d48ee8884d6fcaa6729ad9912171371233c38d56019ee0f3 not found: ID does not exist" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.271890 4880 scope.go:117] "RemoveContainer" containerID="f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d" Dec 01 03:11:29 crc kubenswrapper[4880]: E1201 
03:11:29.272424 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d\": container with ID starting with f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d not found: ID does not exist" containerID="f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d" Dec 01 03:11:29 crc kubenswrapper[4880]: I1201 03:11:29.272469 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d"} err="failed to get container status \"f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d\": rpc error: code = NotFound desc = could not find container \"f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d\": container with ID starting with f16d1f2bde2fe3f429f95139023c04791b6f937beb52efdb06e0e99048174f4d not found: ID does not exist" Dec 01 03:11:30 crc kubenswrapper[4880]: I1201 03:11:30.807387 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" path="/var/lib/kubelet/pods/9bd4ada0-9a69-427f-a971-ca5d10cf7eab/volumes" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.806661 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85997758bf-n4wz5"] Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807426 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807442 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807474 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" 
containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807484 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807495 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807503 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807524 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807532 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807557 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807565 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807576 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807584 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807596 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38306228-98ad-4fd2-ad14-8658436aed42" 
containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807605 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807632 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807640 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807658 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807666 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="extract-content" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807678 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807687 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807702 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807710 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="extract-utilities" Dec 01 03:11:45 crc kubenswrapper[4880]: E1201 03:11:45.807726 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" 
containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807734 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807943 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa7c9e0-cb6b-450b-98cc-c641828b5608" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807962 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd4ada0-9a69-427f-a971-ca5d10cf7eab" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807975 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="38306228-98ad-4fd2-ad14-8658436aed42" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.807990 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc858bde-6dc3-4749-9957-82bdc6768e3b" containerName="registry-server" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.808829 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.821454 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4wcbf" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.821743 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.821949 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.822123 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.828804 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85997758bf-n4wz5"] Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.919218 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785c7f4d65-kwfr4"] Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.920298 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.922026 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.931824 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785c7f4d65-kwfr4"] Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.996101 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9mz\" (UniqueName: \"kubernetes.io/projected/a3f784af-8b13-43a0-aec5-4d6d311fd941-kube-api-access-qr9mz\") pod \"dnsmasq-dns-85997758bf-n4wz5\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:45 crc kubenswrapper[4880]: I1201 03:11:45.996199 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f784af-8b13-43a0-aec5-4d6d311fd941-config\") pod \"dnsmasq-dns-85997758bf-n4wz5\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.097689 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9mz\" (UniqueName: \"kubernetes.io/projected/a3f784af-8b13-43a0-aec5-4d6d311fd941-kube-api-access-qr9mz\") pod \"dnsmasq-dns-85997758bf-n4wz5\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.097743 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-dns-svc\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" 
Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.097780 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmh7d\" (UniqueName: \"kubernetes.io/projected/11845a43-e0cb-4e63-8889-6f082cdab81e-kube-api-access-pmh7d\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.097804 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-config\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.098250 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f784af-8b13-43a0-aec5-4d6d311fd941-config\") pod \"dnsmasq-dns-85997758bf-n4wz5\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.099313 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f784af-8b13-43a0-aec5-4d6d311fd941-config\") pod \"dnsmasq-dns-85997758bf-n4wz5\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.121365 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9mz\" (UniqueName: \"kubernetes.io/projected/a3f784af-8b13-43a0-aec5-4d6d311fd941-kube-api-access-qr9mz\") pod \"dnsmasq-dns-85997758bf-n4wz5\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 
03:11:46.141464 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.199505 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmh7d\" (UniqueName: \"kubernetes.io/projected/11845a43-e0cb-4e63-8889-6f082cdab81e-kube-api-access-pmh7d\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.200093 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-config\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.200292 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-dns-svc\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.201002 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-config\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.201049 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-dns-svc\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " 
pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.241574 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmh7d\" (UniqueName: \"kubernetes.io/projected/11845a43-e0cb-4e63-8889-6f082cdab81e-kube-api-access-pmh7d\") pod \"dnsmasq-dns-785c7f4d65-kwfr4\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.533575 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.609750 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85997758bf-n4wz5"] Dec 01 03:11:46 crc kubenswrapper[4880]: W1201 03:11:46.627236 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3f784af_8b13_43a0_aec5_4d6d311fd941.slice/crio-161be8eaef69e44cef13fc6b9ab5e456f7dd5d53b508408e7725984b99b86df9 WatchSource:0}: Error finding container 161be8eaef69e44cef13fc6b9ab5e456f7dd5d53b508408e7725984b99b86df9: Status 404 returned error can't find the container with id 161be8eaef69e44cef13fc6b9ab5e456f7dd5d53b508408e7725984b99b86df9 Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.648231 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:11:46 crc kubenswrapper[4880]: I1201 03:11:46.973625 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785c7f4d65-kwfr4"] Dec 01 03:11:46 crc kubenswrapper[4880]: W1201 03:11:46.990384 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11845a43_e0cb_4e63_8889_6f082cdab81e.slice/crio-99686f422795b91236c8f6561975aef177d77b47bd7c705405425353ba2a1e4b WatchSource:0}: Error finding 
container 99686f422795b91236c8f6561975aef177d77b47bd7c705405425353ba2a1e4b: Status 404 returned error can't find the container with id 99686f422795b91236c8f6561975aef177d77b47bd7c705405425353ba2a1e4b Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.245268 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785c7f4d65-kwfr4"] Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.275374 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86f84f895-7n558"] Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.279746 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.284646 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86f84f895-7n558"] Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.311969 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" event={"ID":"11845a43-e0cb-4e63-8889-6f082cdab81e","Type":"ContainerStarted","Data":"99686f422795b91236c8f6561975aef177d77b47bd7c705405425353ba2a1e4b"} Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.319170 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-config\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.319233 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-dns-svc\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc 
kubenswrapper[4880]: I1201 03:11:47.319257 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptvv\" (UniqueName: \"kubernetes.io/projected/dec59321-c9ec-4666-8101-e03922db2a16-kube-api-access-lptvv\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.321928 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85997758bf-n4wz5" event={"ID":"a3f784af-8b13-43a0-aec5-4d6d311fd941","Type":"ContainerStarted","Data":"161be8eaef69e44cef13fc6b9ab5e456f7dd5d53b508408e7725984b99b86df9"} Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.368766 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.368811 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.420890 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-config\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.421074 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-dns-svc\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.421102 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lptvv\" (UniqueName: \"kubernetes.io/projected/dec59321-c9ec-4666-8101-e03922db2a16-kube-api-access-lptvv\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.422202 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-dns-svc\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.429985 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-config\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.459746 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lptvv\" (UniqueName: \"kubernetes.io/projected/dec59321-c9ec-4666-8101-e03922db2a16-kube-api-access-lptvv\") pod \"dnsmasq-dns-86f84f895-7n558\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.608082 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.925046 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85997758bf-n4wz5"] Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.969165 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b4db87645-72crn"] Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.970535 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:47 crc kubenswrapper[4880]: I1201 03:11:47.976292 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4db87645-72crn"] Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.033413 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-dns-svc\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.034217 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpv56\" (UniqueName: \"kubernetes.io/projected/2742ec08-369d-4f37-85d8-e1b5cb89a51f-kube-api-access-mpv56\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.034272 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-config\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 
03:11:48.134995 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpv56\" (UniqueName: \"kubernetes.io/projected/2742ec08-369d-4f37-85d8-e1b5cb89a51f-kube-api-access-mpv56\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.135052 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-config\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.135127 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-dns-svc\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.136004 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-dns-svc\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.136045 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-config\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.172544 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpv56\" 
(UniqueName: \"kubernetes.io/projected/2742ec08-369d-4f37-85d8-e1b5cb89a51f-kube-api-access-mpv56\") pod \"dnsmasq-dns-6b4db87645-72crn\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.303645 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.372491 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86f84f895-7n558"] Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.469234 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.470738 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.473294 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5hwf6" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.474081 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.474203 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.474369 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.474550 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.474795 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 03:11:48 crc kubenswrapper[4880]: 
I1201 03:11:48.478385 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.482390 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.653685 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.653750 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.653935 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.653960 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654051 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654091 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654248 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654311 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654371 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654400 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrbgp\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-kube-api-access-rrbgp\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.654502 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756305 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756367 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756403 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756430 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756496 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756519 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756550 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756582 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756623 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756649 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrbgp\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-kube-api-access-rrbgp\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.756671 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.757306 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.758174 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.758518 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") device mount path 
\"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.758556 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.760915 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.761379 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.768226 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.774173 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.774389 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.791967 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.803116 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrbgp\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-kube-api-access-rrbgp\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.812854 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.815438 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4db87645-72crn"] Dec 01 03:11:48 crc kubenswrapper[4880]: I1201 03:11:48.821278 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:11:48 crc kubenswrapper[4880]: W1201 03:11:48.822072 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2742ec08_369d_4f37_85d8_e1b5cb89a51f.slice/crio-1a8a0dac02b27d6a43888dd394014cfdd3888c670b5d764748cf5929d9bc00fc WatchSource:0}: Error finding container 1a8a0dac02b27d6a43888dd394014cfdd3888c670b5d764748cf5929d9bc00fc: Status 404 returned error can't find the container with id 1a8a0dac02b27d6a43888dd394014cfdd3888c670b5d764748cf5929d9bc00fc Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.122697 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.128749 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.132475 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.139756 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.139793 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.139960 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.139970 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.140268 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.140486 4880 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4jtkx" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.149262 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263770 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263819 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263839 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgn98\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-kube-api-access-sgn98\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263853 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-config-data\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263891 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263912 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7b466f3-1cab-4282-963d-2cf055d1514f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263928 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.263983 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.264049 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.264086 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.264145 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7b466f3-1cab-4282-963d-2cf055d1514f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.354653 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f84f895-7n558" event={"ID":"dec59321-c9ec-4666-8101-e03922db2a16","Type":"ContainerStarted","Data":"b38c56adf5fec452f515e8d7eeb1830976b21543ca2319d95ba4d334a8c74d7f"} Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.358944 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4db87645-72crn" event={"ID":"2742ec08-369d-4f37-85d8-e1b5cb89a51f","Type":"ContainerStarted","Data":"1a8a0dac02b27d6a43888dd394014cfdd3888c670b5d764748cf5929d9bc00fc"} Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365539 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-config-data\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365584 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365604 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7b466f3-1cab-4282-963d-2cf055d1514f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365620 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365779 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365820 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365870 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365911 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d7b466f3-1cab-4282-963d-2cf055d1514f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365942 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365964 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.365996 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgn98\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-kube-api-access-sgn98\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.367315 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-config-data\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.368020 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") device mount path \"/mnt/openstack/pv06\"" 
pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.368054 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.368546 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.368722 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.369049 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.379665 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.380511 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/d7b466f3-1cab-4282-963d-2cf055d1514f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.383797 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.400973 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7b466f3-1cab-4282-963d-2cf055d1514f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.401039 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgn98\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-kube-api-access-sgn98\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.406204 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.417522 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " pod="openstack/rabbitmq-server-0" Dec 01 03:11:49 crc kubenswrapper[4880]: I1201 03:11:49.461395 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.237266 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:11:50 crc kubenswrapper[4880]: W1201 03:11:50.295498 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b466f3_1cab_4282_963d_2cf055d1514f.slice/crio-93b18078e1f7eb4a0510c1b412a4243507cf9c4916e87205ab606c82d43f26ad WatchSource:0}: Error finding container 93b18078e1f7eb4a0510c1b412a4243507cf9c4916e87205ab606c82d43f26ad: Status 404 returned error can't find the container with id 93b18078e1f7eb4a0510c1b412a4243507cf9c4916e87205ab606c82d43f26ad Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.372132 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd","Type":"ContainerStarted","Data":"b8bd0258bc7cddee83a8378215e1d6582db40e71dbfa6674786878c12a60d406"} Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.384699 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d7b466f3-1cab-4282-963d-2cf055d1514f","Type":"ContainerStarted","Data":"93b18078e1f7eb4a0510c1b412a4243507cf9c4916e87205ab606c82d43f26ad"} Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.640485 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.646070 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.650002 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2bsns" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.650170 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.650395 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.650590 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.651222 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.662356 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.797090 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a77101-889e-41cf-adfd-a563ce823710-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.797140 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94a77101-889e-41cf-adfd-a563ce823710-config-data-generated\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.797173 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.797190 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzpwr\" (UniqueName: \"kubernetes.io/projected/94a77101-889e-41cf-adfd-a563ce823710-kube-api-access-rzpwr\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.797225 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-config-data-default\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.797254 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a77101-889e-41cf-adfd-a563ce823710-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.800079 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-kolla-config\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.800112 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-operator-scripts\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901342 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-config-data-default\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901389 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a77101-889e-41cf-adfd-a563ce823710-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901439 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-kolla-config\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901485 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-operator-scripts\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901541 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/94a77101-889e-41cf-adfd-a563ce823710-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901564 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94a77101-889e-41cf-adfd-a563ce823710-config-data-generated\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901593 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.901609 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzpwr\" (UniqueName: \"kubernetes.io/projected/94a77101-889e-41cf-adfd-a563ce823710-kube-api-access-rzpwr\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.905318 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-config-data-default\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.907171 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-operator-scripts\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.910462 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/94a77101-889e-41cf-adfd-a563ce823710-config-data-generated\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.910816 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.910970 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a77101-889e-41cf-adfd-a563ce823710-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.911414 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94a77101-889e-41cf-adfd-a563ce823710-kolla-config\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.933211 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzpwr\" (UniqueName: \"kubernetes.io/projected/94a77101-889e-41cf-adfd-a563ce823710-kube-api-access-rzpwr\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:50 crc kubenswrapper[4880]: I1201 03:11:50.941454 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94a77101-889e-41cf-adfd-a563ce823710-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:51 crc kubenswrapper[4880]: I1201 03:11:51.004736 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"94a77101-889e-41cf-adfd-a563ce823710\") " pod="openstack/openstack-galera-0" Dec 01 03:11:51 crc kubenswrapper[4880]: I1201 03:11:51.270632 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 03:11:51 crc kubenswrapper[4880]: I1201 03:11:51.955210 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.245687 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.248132 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.252113 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.253017 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7km8v" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.254346 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.258380 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.263902 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.446877 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/70645748-70d3-43e0-a111-440adaacf742-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.446921 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70645748-70d3-43e0-a111-440adaacf742-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.446939 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87pcr\" (UniqueName: 
\"kubernetes.io/projected/70645748-70d3-43e0-a111-440adaacf742-kube-api-access-87pcr\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.446974 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.447033 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/70645748-70d3-43e0-a111-440adaacf742-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.447060 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.447086 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.447112 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.467380 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"94a77101-889e-41cf-adfd-a563ce823710","Type":"ContainerStarted","Data":"39a665827ec162192b6b9cc580b36482c30e53d8933998e1904fd206f1acade6"} Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551334 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/70645748-70d3-43e0-a111-440adaacf742-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551382 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551411 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551443 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " 
pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551505 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/70645748-70d3-43e0-a111-440adaacf742-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551529 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70645748-70d3-43e0-a111-440adaacf742-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551549 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87pcr\" (UniqueName: \"kubernetes.io/projected/70645748-70d3-43e0-a111-440adaacf742-kube-api-access-87pcr\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.551574 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.552478 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 
03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.553525 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.553675 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.558377 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/70645748-70d3-43e0-a111-440adaacf742-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.558624 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70645748-70d3-43e0-a111-440adaacf742-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.561664 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70645748-70d3-43e0-a111-440adaacf742-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.575656 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/70645748-70d3-43e0-a111-440adaacf742-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.584000 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.619201 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87pcr\" (UniqueName: \"kubernetes.io/projected/70645748-70d3-43e0-a111-440adaacf742-kube-api-access-87pcr\") pod \"openstack-cell1-galera-0\" (UID: \"70645748-70d3-43e0-a111-440adaacf742\") " pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.623984 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.628470 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.636588 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.636913 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.660115 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4x8lm" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.688337 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.761603 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-config-data\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.767107 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.767167 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-kolla-config\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.767189 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.767264 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hg25\" (UniqueName: \"kubernetes.io/projected/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-kube-api-access-5hg25\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.868359 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-config-data\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.868450 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.868474 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-kolla-config\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.868492 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " 
pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.868534 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hg25\" (UniqueName: \"kubernetes.io/projected/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-kube-api-access-5hg25\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.870229 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-kolla-config\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.871155 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-config-data\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.884608 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.888699 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.892942 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hg25\" (UniqueName: \"kubernetes.io/projected/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-kube-api-access-5hg25\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:52 crc kubenswrapper[4880]: I1201 03:11:52.894355 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5c6a675-54ea-4a6f-b7e3-637720fb8c07-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d5c6a675-54ea-4a6f-b7e3-637720fb8c07\") " pod="openstack/memcached-0" Dec 01 03:11:53 crc kubenswrapper[4880]: I1201 03:11:53.054806 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 01 03:11:53 crc kubenswrapper[4880]: I1201 03:11:53.525468 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 03:11:53 crc kubenswrapper[4880]: I1201 03:11:53.689011 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.483260 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.484477 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.490169 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-98dnm" Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.503034 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.605439 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpq69\" (UniqueName: \"kubernetes.io/projected/3d04ac9c-b88e-44b8-92a6-293f737a6390-kube-api-access-mpq69\") pod \"kube-state-metrics-0\" (UID: \"3d04ac9c-b88e-44b8-92a6-293f737a6390\") " pod="openstack/kube-state-metrics-0" Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.706731 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpq69\" (UniqueName: \"kubernetes.io/projected/3d04ac9c-b88e-44b8-92a6-293f737a6390-kube-api-access-mpq69\") pod \"kube-state-metrics-0\" (UID: \"3d04ac9c-b88e-44b8-92a6-293f737a6390\") " pod="openstack/kube-state-metrics-0" Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.736338 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpq69\" (UniqueName: \"kubernetes.io/projected/3d04ac9c-b88e-44b8-92a6-293f737a6390-kube-api-access-mpq69\") pod \"kube-state-metrics-0\" (UID: \"3d04ac9c-b88e-44b8-92a6-293f737a6390\") " pod="openstack/kube-state-metrics-0" Dec 01 03:11:54 crc kubenswrapper[4880]: I1201 03:11:54.823660 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 03:11:58 crc kubenswrapper[4880]: W1201 03:11:58.294023 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70645748_70d3_43e0_a111_440adaacf742.slice/crio-97d5960f785feec39db28266454d26e7d81abf87ebe2ff03cd2325c53add9d89 WatchSource:0}: Error finding container 97d5960f785feec39db28266454d26e7d81abf87ebe2ff03cd2325c53add9d89: Status 404 returned error can't find the container with id 97d5960f785feec39db28266454d26e7d81abf87ebe2ff03cd2325c53add9d89 Dec 01 03:11:58 crc kubenswrapper[4880]: I1201 03:11:58.574830 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"70645748-70d3-43e0-a111-440adaacf742","Type":"ContainerStarted","Data":"97d5960f785feec39db28266454d26e7d81abf87ebe2ff03cd2325c53add9d89"} Dec 01 03:11:58 crc kubenswrapper[4880]: I1201 03:11:58.576537 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d5c6a675-54ea-4a6f-b7e3-637720fb8c07","Type":"ContainerStarted","Data":"10ebe38b684bde28eadded0cf2121771c6965b277f2c87a86b0fcca34ba48bc6"} Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.275539 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m4jtm"] Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.276451 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.278575 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nl2fm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.282724 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.285384 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.299052 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-8c4h4"] Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.300836 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.316332 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4jtm"] Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.325804 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8c4h4"] Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385158 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f330d83a-b34f-491b-ad56-07e6bb519191-scripts\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385373 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-log-ovn\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " 
pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385489 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-etc-ovs\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385555 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f330d83a-b34f-491b-ad56-07e6bb519191-ovn-controller-tls-certs\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385676 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzkr\" (UniqueName: \"kubernetes.io/projected/f330d83a-b34f-491b-ad56-07e6bb519191-kube-api-access-cbzkr\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385741 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-scripts\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385773 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-run\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " 
pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385803 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-log\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385825 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-lib\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385902 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-run\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.385928 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f330d83a-b34f-491b-ad56-07e6bb519191-combined-ca-bundle\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.386028 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxgpn\" (UniqueName: \"kubernetes.io/projected/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-kube-api-access-vxgpn\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " 
pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.386062 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-run-ovn\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488306 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxgpn\" (UniqueName: \"kubernetes.io/projected/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-kube-api-access-vxgpn\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488435 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-run-ovn\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488520 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f330d83a-b34f-491b-ad56-07e6bb519191-scripts\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488677 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-log-ovn\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488752 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-etc-ovs\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488851 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f330d83a-b34f-491b-ad56-07e6bb519191-ovn-controller-tls-certs\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.488957 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzkr\" (UniqueName: \"kubernetes.io/projected/f330d83a-b34f-491b-ad56-07e6bb519191-kube-api-access-cbzkr\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489021 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-scripts\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489080 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-run\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489137 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-log\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489179 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-lib\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489194 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-run-ovn\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489244 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-run\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489300 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f330d83a-b34f-491b-ad56-07e6bb519191-combined-ca-bundle\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.489409 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-run\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " 
pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.490129 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-run\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.490347 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-etc-ovs\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.490374 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-log\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.490731 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-var-lib\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.490867 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f330d83a-b34f-491b-ad56-07e6bb519191-var-log-ovn\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.491042 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f330d83a-b34f-491b-ad56-07e6bb519191-scripts\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.492564 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-scripts\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.498451 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f330d83a-b34f-491b-ad56-07e6bb519191-ovn-controller-tls-certs\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.505740 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f330d83a-b34f-491b-ad56-07e6bb519191-combined-ca-bundle\") pod \"ovn-controller-m4jtm\" (UID: \"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.519583 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxgpn\" (UniqueName: \"kubernetes.io/projected/f5c5d8e5-716c-4a5f-b46f-7c31779177ed-kube-api-access-vxgpn\") pod \"ovn-controller-ovs-8c4h4\" (UID: \"f5c5d8e5-716c-4a5f-b46f-7c31779177ed\") " pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.528543 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzkr\" (UniqueName: \"kubernetes.io/projected/f330d83a-b34f-491b-ad56-07e6bb519191-kube-api-access-cbzkr\") pod \"ovn-controller-m4jtm\" (UID: 
\"f330d83a-b34f-491b-ad56-07e6bb519191\") " pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.598224 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4jtm" Dec 01 03:11:59 crc kubenswrapper[4880]: I1201 03:11:59.615688 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.545717 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.548666 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.557593 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.557915 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4d89w" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.558237 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.558382 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.558401 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.565593 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621637 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621689 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621754 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3979ab5b-810b-408b-ad96-6f9c6a3baff1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621781 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621805 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3979ab5b-810b-408b-ad96-6f9c6a3baff1-config\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621832 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czjr2\" (UniqueName: 
\"kubernetes.io/projected/3979ab5b-810b-408b-ad96-6f9c6a3baff1-kube-api-access-czjr2\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621890 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3979ab5b-810b-408b-ad96-6f9c6a3baff1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.621925 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.723473 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3979ab5b-810b-408b-ad96-6f9c6a3baff1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.723711 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.723828 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " 
pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.724014 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.724159 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3979ab5b-810b-408b-ad96-6f9c6a3baff1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.724265 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.724314 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3979ab5b-810b-408b-ad96-6f9c6a3baff1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.724164 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.724436 4880 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3979ab5b-810b-408b-ad96-6f9c6a3baff1-config\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.725170 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czjr2\" (UniqueName: \"kubernetes.io/projected/3979ab5b-810b-408b-ad96-6f9c6a3baff1-kube-api-access-czjr2\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.726017 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3979ab5b-810b-408b-ad96-6f9c6a3baff1-config\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.729197 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.731705 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3979ab5b-810b-408b-ad96-6f9c6a3baff1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.734615 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.736386 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.737582 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.740395 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.740635 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.740883 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.741177 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mdmt7" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.747082 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3979ab5b-810b-408b-ad96-6f9c6a3baff1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.750845 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czjr2\" (UniqueName: \"kubernetes.io/projected/3979ab5b-810b-408b-ad96-6f9c6a3baff1-kube-api-access-czjr2\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.785134 4880 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3979ab5b-810b-408b-ad96-6f9c6a3baff1\") " pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.815111 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.890404 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927427 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927511 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927535 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927574 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-config\") pod \"ovsdbserver-sb-0\" 
(UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927606 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927621 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4nh\" (UniqueName: \"kubernetes.io/projected/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-kube-api-access-hq4nh\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927656 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:01 crc kubenswrapper[4880]: I1201 03:12:01.927675 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.028694 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc 
kubenswrapper[4880]: I1201 03:12:02.028752 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.028806 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-config\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.028833 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.028857 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq4nh\" (UniqueName: \"kubernetes.io/projected/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-kube-api-access-hq4nh\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.028929 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.028955 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.029004 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.029084 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.029534 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.029702 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-config\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.030536 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc 
kubenswrapper[4880]: I1201 03:12:02.035628 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.037168 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.044862 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.061029 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq4nh\" (UniqueName: \"kubernetes.io/projected/7e81624d-2ddb-4582-a1aa-d66e6fb1c781-kube-api-access-hq4nh\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.069734 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7e81624d-2ddb-4582-a1aa-d66e6fb1c781\") " pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:02 crc kubenswrapper[4880]: I1201 03:12:02.136185 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:09 crc kubenswrapper[4880]: E1201 03:12:09.540937 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:09 crc kubenswrapper[4880]: E1201 03:12:09.541273 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:09 crc kubenswrapper[4880]: E1201 03:12:09.541418 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrbgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:09 crc 
kubenswrapper[4880]: E1201 03:12:09.542568 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" Dec 01 03:12:09 crc kubenswrapper[4880]: E1201 03:12:09.656276 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" Dec 01 03:12:11 crc kubenswrapper[4880]: E1201 03:12:11.380320 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:11 crc kubenswrapper[4880]: E1201 03:12:11.380722 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:11 crc kubenswrapper[4880]: E1201 03:12:11.380857 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzpwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(94a77101-889e-41cf-adfd-a563ce823710): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:11 crc kubenswrapper[4880]: E1201 03:12:11.383509 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="94a77101-889e-41cf-adfd-a563ce823710" Dec 01 03:12:11 crc kubenswrapper[4880]: E1201 03:12:11.680946 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/openstack-galera-0" podUID="94a77101-889e-41cf-adfd-a563ce823710" Dec 01 03:12:15 crc kubenswrapper[4880]: E1201 03:12:15.255587 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:15 crc kubenswrapper[4880]: E1201 03:12:15.256179 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:15 crc kubenswrapper[4880]: E1201 03:12:15.256402 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 
's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(d7b466f3-1cab-4282-963d-2cf055d1514f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:15 crc kubenswrapper[4880]: E1201 03:12:15.257625 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" Dec 01 03:12:15 crc kubenswrapper[4880]: E1201 03:12:15.710053 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/rabbitmq-server-0" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" Dec 01 03:12:16 crc kubenswrapper[4880]: I1201 03:12:16.569891 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.827557 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.827626 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.827749 4880 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmh7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,
TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-785c7f4d65-kwfr4_openstack(11845a43-e0cb-4e63-8889-6f082cdab81e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.828914 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" podUID="11845a43-e0cb-4e63-8889-6f082cdab81e" Dec 01 03:12:16 crc kubenswrapper[4880]: W1201 03:12:16.861301 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3979ab5b_810b_408b_ad96_6f9c6a3baff1.slice/crio-6a0b2e3e99488e048890c378c47dba8c309df663ed57d9ff5440290c7c3f423c WatchSource:0}: Error finding container 6a0b2e3e99488e048890c378c47dba8c309df663ed57d9ff5440290c7c3f423c: Status 404 returned error can't find the container with id 6a0b2e3e99488e048890c378c47dba8c309df663ed57d9ff5440290c7c3f423c Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.882892 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.882945 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.883047 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpv56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6b4db87645-72crn_openstack(2742ec08-369d-4f37-85d8-e1b5cb89a51f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.884343 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6b4db87645-72crn" podUID="2742ec08-369d-4f37-85d8-e1b5cb89a51f" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.885220 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.885248 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.885429 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qr9mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85997758bf-n4wz5_openstack(a3f784af-8b13-43a0-aec5-4d6d311fd941): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.886775 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-85997758bf-n4wz5" podUID="a3f784af-8b13-43a0-aec5-4d6d311fd941" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.928185 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.928236 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.928405 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lptvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86f84f895-7n558_openstack(dec59321-c9ec-4666-8101-e03922db2a16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:12:16 crc kubenswrapper[4880]: E1201 03:12:16.929655 4880 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86f84f895-7n558" podUID="dec59321-c9ec-4666-8101-e03922db2a16" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.313498 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4jtm"] Dec 01 03:12:17 crc kubenswrapper[4880]: W1201 03:12:17.332032 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf330d83a_b34f_491b_ad56_07e6bb519191.slice/crio-648da356a53292ea8aadb125cef959d2c1da813885429931bd337a5349865116 WatchSource:0}: Error finding container 648da356a53292ea8aadb125cef959d2c1da813885429931bd337a5349865116: Status 404 returned error can't find the container with id 648da356a53292ea8aadb125cef959d2c1da813885429931bd337a5349865116 Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.369087 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.369140 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.369183 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.369829 4880 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d75a52daa0e2a7f2599a1e892312d328b61520f98232bcc7cdb455390b50937"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.369887 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://2d75a52daa0e2a7f2599a1e892312d328b61520f98232bcc7cdb455390b50937" gracePeriod=600 Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.378010 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:12:17 crc kubenswrapper[4880]: W1201 03:12:17.387854 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d04ac9c_b88e_44b8_92a6_293f737a6390.slice/crio-cba3ade1ef7bf5640d5786478d9ab9730672fe5df79fe7ce5fe26606f489debd WatchSource:0}: Error finding container cba3ade1ef7bf5640d5786478d9ab9730672fe5df79fe7ce5fe26606f489debd: Status 404 returned error can't find the container with id cba3ade1ef7bf5640d5786478d9ab9730672fe5df79fe7ce5fe26606f489debd Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.562720 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-8c4h4"] Dec 01 03:12:17 crc kubenswrapper[4880]: W1201 03:12:17.566529 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c5d8e5_716c_4a5f_b46f_7c31779177ed.slice/crio-a1376d119882ceee6d0c24b97220e117d178018d603b5d242803709149745147 WatchSource:0}: Error finding container 
a1376d119882ceee6d0c24b97220e117d178018d603b5d242803709149745147: Status 404 returned error can't find the container with id a1376d119882ceee6d0c24b97220e117d178018d603b5d242803709149745147 Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.656118 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 03:12:17 crc kubenswrapper[4880]: W1201 03:12:17.659399 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e81624d_2ddb_4582_a1aa_d66e6fb1c781.slice/crio-f3ed229ff415585607bce73c7bd1ad19456f4e150a420e940767357672e4d411 WatchSource:0}: Error finding container f3ed229ff415585607bce73c7bd1ad19456f4e150a420e940767357672e4d411: Status 404 returned error can't find the container with id f3ed229ff415585607bce73c7bd1ad19456f4e150a420e940767357672e4d411 Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.721258 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4jtm" event={"ID":"f330d83a-b34f-491b-ad56-07e6bb519191","Type":"ContainerStarted","Data":"648da356a53292ea8aadb125cef959d2c1da813885429931bd337a5349865116"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.724064 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="2d75a52daa0e2a7f2599a1e892312d328b61520f98232bcc7cdb455390b50937" exitCode=0 Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.724172 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"2d75a52daa0e2a7f2599a1e892312d328b61520f98232bcc7cdb455390b50937"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.724200 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"34d41201e834b41f2c5149b0278e08d421cef1c0ed99b101f5ffb45ff209ff57"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.724234 4880 scope.go:117] "RemoveContainer" containerID="9e7e08cb7118ecb74645ada937d5a46b564fd983b8d99301ceea950ba427688d" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.728080 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"70645748-70d3-43e0-a111-440adaacf742","Type":"ContainerStarted","Data":"d138c804328486b5805c48f3e1bb983b5668228db958c9f1bd29ed63f58e4272"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.730623 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d5c6a675-54ea-4a6f-b7e3-637720fb8c07","Type":"ContainerStarted","Data":"156d8b90c7033b5d40e52077762f7618b5bdd164f980a54da59c160b3bf3595d"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.731533 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.733114 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7e81624d-2ddb-4582-a1aa-d66e6fb1c781","Type":"ContainerStarted","Data":"f3ed229ff415585607bce73c7bd1ad19456f4e150a420e940767357672e4d411"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.735468 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3d04ac9c-b88e-44b8-92a6-293f737a6390","Type":"ContainerStarted","Data":"cba3ade1ef7bf5640d5786478d9ab9730672fe5df79fe7ce5fe26606f489debd"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.736619 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"3979ab5b-810b-408b-ad96-6f9c6a3baff1","Type":"ContainerStarted","Data":"6a0b2e3e99488e048890c378c47dba8c309df663ed57d9ff5440290c7c3f423c"} Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.746755 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8c4h4" event={"ID":"f5c5d8e5-716c-4a5f-b46f-7c31779177ed","Type":"ContainerStarted","Data":"a1376d119882ceee6d0c24b97220e117d178018d603b5d242803709149745147"} Dec 01 03:12:17 crc kubenswrapper[4880]: E1201 03:12:17.747610 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/dnsmasq-dns-86f84f895-7n558" podUID="dec59321-c9ec-4666-8101-e03922db2a16" Dec 01 03:12:17 crc kubenswrapper[4880]: E1201 03:12:17.748005 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/dnsmasq-dns-6b4db87645-72crn" podUID="2742ec08-369d-4f37-85d8-e1b5cb89a51f" Dec 01 03:12:17 crc kubenswrapper[4880]: I1201 03:12:17.768432 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.170946221 podStartE2EDuration="25.768414694s" podCreationTimestamp="2025-12-01 03:11:52 +0000 UTC" firstStartedPulling="2025-12-01 03:11:58.296769856 +0000 UTC m=+947.808024228" lastFinishedPulling="2025-12-01 03:12:16.894238329 +0000 UTC m=+966.405492701" observedRunningTime="2025-12-01 03:12:17.760411054 +0000 UTC m=+967.271665426" watchObservedRunningTime="2025-12-01 03:12:17.768414694 +0000 UTC m=+967.279669066" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.253029 4880 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.256390 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287077 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-dns-svc\") pod \"11845a43-e0cb-4e63-8889-6f082cdab81e\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287215 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-config\") pod \"11845a43-e0cb-4e63-8889-6f082cdab81e\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287295 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmh7d\" (UniqueName: \"kubernetes.io/projected/11845a43-e0cb-4e63-8889-6f082cdab81e-kube-api-access-pmh7d\") pod \"11845a43-e0cb-4e63-8889-6f082cdab81e\" (UID: \"11845a43-e0cb-4e63-8889-6f082cdab81e\") " Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287396 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr9mz\" (UniqueName: \"kubernetes.io/projected/a3f784af-8b13-43a0-aec5-4d6d311fd941-kube-api-access-qr9mz\") pod \"a3f784af-8b13-43a0-aec5-4d6d311fd941\" (UID: \"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287471 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f784af-8b13-43a0-aec5-4d6d311fd941-config\") pod \"a3f784af-8b13-43a0-aec5-4d6d311fd941\" (UID: 
\"a3f784af-8b13-43a0-aec5-4d6d311fd941\") " Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287690 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11845a43-e0cb-4e63-8889-6f082cdab81e" (UID: "11845a43-e0cb-4e63-8889-6f082cdab81e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.287862 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.288101 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f784af-8b13-43a0-aec5-4d6d311fd941-config" (OuterVolumeSpecName: "config") pod "a3f784af-8b13-43a0-aec5-4d6d311fd941" (UID: "a3f784af-8b13-43a0-aec5-4d6d311fd941"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.289135 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-config" (OuterVolumeSpecName: "config") pod "11845a43-e0cb-4e63-8889-6f082cdab81e" (UID: "11845a43-e0cb-4e63-8889-6f082cdab81e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.292951 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11845a43-e0cb-4e63-8889-6f082cdab81e-kube-api-access-pmh7d" (OuterVolumeSpecName: "kube-api-access-pmh7d") pod "11845a43-e0cb-4e63-8889-6f082cdab81e" (UID: "11845a43-e0cb-4e63-8889-6f082cdab81e"). InnerVolumeSpecName "kube-api-access-pmh7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.293125 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f784af-8b13-43a0-aec5-4d6d311fd941-kube-api-access-qr9mz" (OuterVolumeSpecName: "kube-api-access-qr9mz") pod "a3f784af-8b13-43a0-aec5-4d6d311fd941" (UID: "a3f784af-8b13-43a0-aec5-4d6d311fd941"). InnerVolumeSpecName "kube-api-access-qr9mz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.388562 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr9mz\" (UniqueName: \"kubernetes.io/projected/a3f784af-8b13-43a0-aec5-4d6d311fd941-kube-api-access-qr9mz\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.388591 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f784af-8b13-43a0-aec5-4d6d311fd941-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.388600 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11845a43-e0cb-4e63-8889-6f082cdab81e-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.388616 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmh7d\" (UniqueName: \"kubernetes.io/projected/11845a43-e0cb-4e63-8889-6f082cdab81e-kube-api-access-pmh7d\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.761426 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.761415 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785c7f4d65-kwfr4" event={"ID":"11845a43-e0cb-4e63-8889-6f082cdab81e","Type":"ContainerDied","Data":"99686f422795b91236c8f6561975aef177d77b47bd7c705405425353ba2a1e4b"} Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.762896 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85997758bf-n4wz5" event={"ID":"a3f784af-8b13-43a0-aec5-4d6d311fd941","Type":"ContainerDied","Data":"161be8eaef69e44cef13fc6b9ab5e456f7dd5d53b508408e7725984b99b86df9"} Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.763038 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85997758bf-n4wz5" Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.915035 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85997758bf-n4wz5"] Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.928006 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85997758bf-n4wz5"] Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.932206 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785c7f4d65-kwfr4"] Dec 01 03:12:18 crc kubenswrapper[4880]: I1201 03:12:18.935318 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785c7f4d65-kwfr4"] Dec 01 03:12:20 crc kubenswrapper[4880]: I1201 03:12:20.780614 4880 generic.go:334] "Generic (PLEG): container finished" podID="70645748-70d3-43e0-a111-440adaacf742" containerID="d138c804328486b5805c48f3e1bb983b5668228db958c9f1bd29ed63f58e4272" exitCode=0 Dec 01 03:12:20 crc kubenswrapper[4880]: I1201 03:12:20.780784 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"70645748-70d3-43e0-a111-440adaacf742","Type":"ContainerDied","Data":"d138c804328486b5805c48f3e1bb983b5668228db958c9f1bd29ed63f58e4272"} Dec 01 03:12:20 crc kubenswrapper[4880]: I1201 03:12:20.797602 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11845a43-e0cb-4e63-8889-6f082cdab81e" path="/var/lib/kubelet/pods/11845a43-e0cb-4e63-8889-6f082cdab81e/volumes" Dec 01 03:12:20 crc kubenswrapper[4880]: I1201 03:12:20.804797 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f784af-8b13-43a0-aec5-4d6d311fd941" path="/var/lib/kubelet/pods/a3f784af-8b13-43a0-aec5-4d6d311fd941/volumes" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.807322 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"70645748-70d3-43e0-a111-440adaacf742","Type":"ContainerStarted","Data":"828aa8d549639290d427c4d7b0adedab165adf5df9010e383660d035ff74f1b8"} Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.809686 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8c4h4" event={"ID":"f5c5d8e5-716c-4a5f-b46f-7c31779177ed","Type":"ContainerStarted","Data":"fcac90ad9153ebf29c65849bb237b6e5ac426601f9e0bbfd94f1f9b5b959e395"} Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.812192 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7e81624d-2ddb-4582-a1aa-d66e6fb1c781","Type":"ContainerStarted","Data":"e1e3379d3e423cd25e40260b79f7f517be0170ad2139f48e066c4aa1bac1f86c"} Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.813692 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4jtm" event={"ID":"f330d83a-b34f-491b-ad56-07e6bb519191","Type":"ContainerStarted","Data":"1ae90f42f499bc0a18da81ce246946ae7dd1e5405b71987a8b46b9a61d844916"} Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.813835 4880 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-m4jtm" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.814814 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3d04ac9c-b88e-44b8-92a6-293f737a6390","Type":"ContainerStarted","Data":"10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac"} Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.814943 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.816139 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3979ab5b-810b-408b-ad96-6f9c6a3baff1","Type":"ContainerStarted","Data":"b89963342c90643081b230e9c8afa47a652a7c31c59fe3a1bcbb0953982ca326"} Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.835150 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.288086463 podStartE2EDuration="31.835053695s" podCreationTimestamp="2025-12-01 03:11:51 +0000 UTC" firstStartedPulling="2025-12-01 03:11:58.304337155 +0000 UTC m=+947.815591527" lastFinishedPulling="2025-12-01 03:12:16.851304377 +0000 UTC m=+966.362558759" observedRunningTime="2025-12-01 03:12:22.832644795 +0000 UTC m=+972.343899177" watchObservedRunningTime="2025-12-01 03:12:22.835053695 +0000 UTC m=+972.346308067" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.863675 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.005931756 podStartE2EDuration="28.863655499s" podCreationTimestamp="2025-12-01 03:11:54 +0000 UTC" firstStartedPulling="2025-12-01 03:12:17.389271484 +0000 UTC m=+966.900525856" lastFinishedPulling="2025-12-01 03:12:22.246995217 +0000 UTC m=+971.758249599" observedRunningTime="2025-12-01 03:12:22.854132532 +0000 UTC m=+972.365386904" 
watchObservedRunningTime="2025-12-01 03:12:22.863655499 +0000 UTC m=+972.374909881" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.886279 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.886536 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 01 03:12:22 crc kubenswrapper[4880]: I1201 03:12:22.899754 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-m4jtm" podStartSLOduration=19.613876502 podStartE2EDuration="23.899733141s" podCreationTimestamp="2025-12-01 03:11:59 +0000 UTC" firstStartedPulling="2025-12-01 03:12:17.346387003 +0000 UTC m=+966.857641375" lastFinishedPulling="2025-12-01 03:12:21.632243642 +0000 UTC m=+971.143498014" observedRunningTime="2025-12-01 03:12:22.893048174 +0000 UTC m=+972.404302556" watchObservedRunningTime="2025-12-01 03:12:22.899733141 +0000 UTC m=+972.410987513" Dec 01 03:12:23 crc kubenswrapper[4880]: I1201 03:12:23.056439 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 01 03:12:23 crc kubenswrapper[4880]: I1201 03:12:23.824273 4880 generic.go:334] "Generic (PLEG): container finished" podID="f5c5d8e5-716c-4a5f-b46f-7c31779177ed" containerID="fcac90ad9153ebf29c65849bb237b6e5ac426601f9e0bbfd94f1f9b5b959e395" exitCode=0 Dec 01 03:12:23 crc kubenswrapper[4880]: I1201 03:12:23.824346 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8c4h4" event={"ID":"f5c5d8e5-716c-4a5f-b46f-7c31779177ed","Type":"ContainerDied","Data":"fcac90ad9153ebf29c65849bb237b6e5ac426601f9e0bbfd94f1f9b5b959e395"} Dec 01 03:12:24 crc kubenswrapper[4880]: I1201 03:12:24.907754 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8c4h4" 
event={"ID":"f5c5d8e5-716c-4a5f-b46f-7c31779177ed","Type":"ContainerStarted","Data":"4b02328fa915ec52521bc737167706661874e415b29bcd394d9680b0466f3e95"} Dec 01 03:12:24 crc kubenswrapper[4880]: I1201 03:12:24.918931 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4db87645-72crn"] Dec 01 03:12:24 crc kubenswrapper[4880]: I1201 03:12:24.924882 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cf454749-vbp7t"] Dec 01 03:12:24 crc kubenswrapper[4880]: I1201 03:12:24.926058 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:24 crc kubenswrapper[4880]: I1201 03:12:24.968307 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cf454749-vbp7t"] Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.023648 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4tc\" (UniqueName: \"kubernetes.io/projected/ae5197a2-6d41-441b-b781-7b93ea7831f8-kube-api-access-dc4tc\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.023711 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-config\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.023731 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-dns-svc\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " 
pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.125023 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4tc\" (UniqueName: \"kubernetes.io/projected/ae5197a2-6d41-441b-b781-7b93ea7831f8-kube-api-access-dc4tc\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.125350 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-config\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.125377 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-dns-svc\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.126122 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-dns-svc\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.128138 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-config\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.151069 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc4tc\" (UniqueName: \"kubernetes.io/projected/ae5197a2-6d41-441b-b781-7b93ea7831f8-kube-api-access-dc4tc\") pod \"dnsmasq-dns-79cf454749-vbp7t\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.263420 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.780631 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.835670 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-dns-svc\") pod \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.836029 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpv56\" (UniqueName: \"kubernetes.io/projected/2742ec08-369d-4f37-85d8-e1b5cb89a51f-kube-api-access-mpv56\") pod \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.836091 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-config\") pod \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\" (UID: \"2742ec08-369d-4f37-85d8-e1b5cb89a51f\") " Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.836255 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"2742ec08-369d-4f37-85d8-e1b5cb89a51f" (UID: "2742ec08-369d-4f37-85d8-e1b5cb89a51f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.836570 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-config" (OuterVolumeSpecName: "config") pod "2742ec08-369d-4f37-85d8-e1b5cb89a51f" (UID: "2742ec08-369d-4f37-85d8-e1b5cb89a51f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.837275 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.837300 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2742ec08-369d-4f37-85d8-e1b5cb89a51f-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.854774 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2742ec08-369d-4f37-85d8-e1b5cb89a51f-kube-api-access-mpv56" (OuterVolumeSpecName: "kube-api-access-mpv56") pod "2742ec08-369d-4f37-85d8-e1b5cb89a51f" (UID: "2742ec08-369d-4f37-85d8-e1b5cb89a51f"). InnerVolumeSpecName "kube-api-access-mpv56". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.918696 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4db87645-72crn" event={"ID":"2742ec08-369d-4f37-85d8-e1b5cb89a51f","Type":"ContainerDied","Data":"1a8a0dac02b27d6a43888dd394014cfdd3888c670b5d764748cf5929d9bc00fc"} Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.918741 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4db87645-72crn" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.938914 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpv56\" (UniqueName: \"kubernetes.io/projected/2742ec08-369d-4f37-85d8-e1b5cb89a51f-kube-api-access-mpv56\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.984356 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4db87645-72crn"] Dec 01 03:12:25 crc kubenswrapper[4880]: I1201 03:12:25.992205 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b4db87645-72crn"] Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.055185 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.060122 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.061835 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-bsccm" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.062011 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.062119 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.062264 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.090929 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.141528 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/121483f7-5771-4af8-9777-b980cc1bd4ad-cache\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.141778 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/121483f7-5771-4af8-9777-b980cc1bd4ad-lock\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.141956 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.142066 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrv7\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-kube-api-access-xwrv7\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.142161 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.243412 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " 
pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.243972 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/121483f7-5771-4af8-9777-b980cc1bd4ad-cache\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: E1201 03:12:26.243849 4880 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 03:12:26 crc kubenswrapper[4880]: E1201 03:12:26.244639 4880 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 03:12:26 crc kubenswrapper[4880]: E1201 03:12:26.244704 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift podName:121483f7-5771-4af8-9777-b980cc1bd4ad nodeName:}" failed. No retries permitted until 2025-12-01 03:12:26.744674549 +0000 UTC m=+976.255928931 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift") pod "swift-storage-0" (UID: "121483f7-5771-4af8-9777-b980cc1bd4ad") : configmap "swift-ring-files" not found Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.244613 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/121483f7-5771-4af8-9777-b980cc1bd4ad-lock\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.245020 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.245163 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwrv7\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-kube-api-access-xwrv7\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.244529 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/121483f7-5771-4af8-9777-b980cc1bd4ad-cache\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.245406 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") device mount path \"/mnt/openstack/pv11\"" 
pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.248361 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/121483f7-5771-4af8-9777-b980cc1bd4ad-lock\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.267917 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwrv7\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-kube-api-access-xwrv7\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.274158 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.755546 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:26 crc kubenswrapper[4880]: E1201 03:12:26.755706 4880 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 03:12:26 crc kubenswrapper[4880]: E1201 03:12:26.756044 4880 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 03:12:26 crc kubenswrapper[4880]: E1201 03:12:26.756087 4880 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift podName:121483f7-5771-4af8-9777-b980cc1bd4ad nodeName:}" failed. No retries permitted until 2025-12-01 03:12:27.756073976 +0000 UTC m=+977.267328348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift") pod "swift-storage-0" (UID: "121483f7-5771-4af8-9777-b980cc1bd4ad") : configmap "swift-ring-files" not found Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.793407 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2742ec08-369d-4f37-85d8-e1b5cb89a51f" path="/var/lib/kubelet/pods/2742ec08-369d-4f37-85d8-e1b5cb89a51f/volumes" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.793948 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cf454749-vbp7t"] Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.925988 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-8c4h4" event={"ID":"f5c5d8e5-716c-4a5f-b46f-7c31779177ed","Type":"ContainerStarted","Data":"91e95fdc6f94408c8a5a597c38feb2e1b00dcd39f4304db6a00ce96239dfc2e1"} Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.932976 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" event={"ID":"ae5197a2-6d41-441b-b781-7b93ea7831f8","Type":"ContainerStarted","Data":"6b55699ec43ee02e244e3aba460ef89fda36097b360df67347103119fb178185"} Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.933030 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.933048 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-8c4h4" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.933604 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7e81624d-2ddb-4582-a1aa-d66e6fb1c781","Type":"ContainerStarted","Data":"e5e6d351811f849b69d4b9799410172504bd590c911143b71ce4babbeabb6b37"} Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.935482 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3979ab5b-810b-408b-ad96-6f9c6a3baff1","Type":"ContainerStarted","Data":"1ba982115eec744f52ab1f8515f6d29dfef0d72ade692fcbe182539965a5bcb0"} Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.937284 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"94a77101-889e-41cf-adfd-a563ce823710","Type":"ContainerStarted","Data":"15ea21aa525739e8d8dd4643d97f9eabb30e7ae69faf27a63aec00b3927d65f2"} Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.953967 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-8c4h4" podStartSLOduration=23.891009845 podStartE2EDuration="27.953949926s" podCreationTimestamp="2025-12-01 03:11:59 +0000 UTC" firstStartedPulling="2025-12-01 03:12:17.568393668 +0000 UTC m=+967.079648040" lastFinishedPulling="2025-12-01 03:12:21.631333749 +0000 UTC m=+971.142588121" observedRunningTime="2025-12-01 03:12:26.948705584 +0000 UTC m=+976.459959946" watchObservedRunningTime="2025-12-01 03:12:26.953949926 +0000 UTC m=+976.465204298" Dec 01 03:12:26 crc kubenswrapper[4880]: I1201 03:12:26.979660 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.351794557 podStartE2EDuration="26.979642351s" podCreationTimestamp="2025-12-01 03:12:00 +0000 UTC" firstStartedPulling="2025-12-01 03:12:16.865689896 +0000 UTC m=+966.376944268" lastFinishedPulling="2025-12-01 03:12:26.49353769 +0000 UTC m=+976.004792062" observedRunningTime="2025-12-01 03:12:26.97322476 +0000 UTC m=+976.484479132" watchObservedRunningTime="2025-12-01 
03:12:26.979642351 +0000 UTC m=+976.490896723" Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.034144 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.165463063 podStartE2EDuration="27.03412492s" podCreationTimestamp="2025-12-01 03:12:00 +0000 UTC" firstStartedPulling="2025-12-01 03:12:17.66176049 +0000 UTC m=+967.173014852" lastFinishedPulling="2025-12-01 03:12:26.530422337 +0000 UTC m=+976.041676709" observedRunningTime="2025-12-01 03:12:27.020913548 +0000 UTC m=+976.532167920" watchObservedRunningTime="2025-12-01 03:12:27.03412492 +0000 UTC m=+976.545379292" Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.062669 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.137213 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.338989 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.803139 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:27 crc kubenswrapper[4880]: E1201 03:12:27.803305 4880 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 03:12:27 crc kubenswrapper[4880]: E1201 03:12:27.803338 4880 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 03:12:27 crc kubenswrapper[4880]: E1201 03:12:27.803425 4880 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift podName:121483f7-5771-4af8-9777-b980cc1bd4ad nodeName:}" failed. No retries permitted until 2025-12-01 03:12:29.803399836 +0000 UTC m=+979.314654248 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift") pod "swift-storage-0" (UID: "121483f7-5771-4af8-9777-b980cc1bd4ad") : configmap "swift-ring-files" not found Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.947919 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd","Type":"ContainerStarted","Data":"5e495af25e395418128c51e910ef72ae6a4966db29958e2b1fc02c70abaf0de2"} Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.949421 4880 generic.go:334] "Generic (PLEG): container finished" podID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerID="a366fccd938fbc0ecbfe77999f385db9c58fbab064a55cfd0aec9a88003d1997" exitCode=0 Dec 01 03:12:27 crc kubenswrapper[4880]: I1201 03:12:27.949653 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" event={"ID":"ae5197a2-6d41-441b-b781-7b93ea7831f8","Type":"ContainerDied","Data":"a366fccd938fbc0ecbfe77999f385db9c58fbab064a55cfd0aec9a88003d1997"} Dec 01 03:12:28 crc kubenswrapper[4880]: I1201 03:12:28.890902 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:28 crc kubenswrapper[4880]: I1201 03:12:28.944959 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:28 crc kubenswrapper[4880]: I1201 03:12:28.962036 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" 
event={"ID":"ae5197a2-6d41-441b-b781-7b93ea7831f8","Type":"ContainerStarted","Data":"861ab21707c5a233dccf644b4c19ecfd8a448e22256b77c22bc9766977eec2d9"} Dec 01 03:12:28 crc kubenswrapper[4880]: I1201 03:12:28.963850 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:28 crc kubenswrapper[4880]: I1201 03:12:28.989309 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" podStartSLOduration=4.938896552 podStartE2EDuration="4.989288708s" podCreationTimestamp="2025-12-01 03:12:24 +0000 UTC" firstStartedPulling="2025-12-01 03:12:26.802733518 +0000 UTC m=+976.313987890" lastFinishedPulling="2025-12-01 03:12:26.853125674 +0000 UTC m=+976.364380046" observedRunningTime="2025-12-01 03:12:28.984498907 +0000 UTC m=+978.495753279" watchObservedRunningTime="2025-12-01 03:12:28.989288708 +0000 UTC m=+978.500543070" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.013036 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.136834 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.190584 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.340102 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86f84f895-7n558"] Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.378671 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"] Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.380105 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.383517 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.392716 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"] Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.532907 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-ovsdbserver-nb\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.533055 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7n96\" (UniqueName: \"kubernetes.io/projected/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-kube-api-access-z7n96\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.533318 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-dns-svc\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.533343 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-config\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " 
pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.597033 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-l6b8k"] Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.604037 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.607056 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.636539 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-dns-svc\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.636580 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-config\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.636633 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-ovsdbserver-nb\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.636656 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7n96\" (UniqueName: \"kubernetes.io/projected/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-kube-api-access-z7n96\") pod 
\"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.638219 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-dns-svc\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.638343 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-config\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.638862 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-ovsdbserver-nb\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.654422 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7n96\" (UniqueName: \"kubernetes.io/projected/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-kube-api-access-z7n96\") pod \"dnsmasq-dns-59f9c6bf9c-8jkhn\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") " pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.702423 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.742077 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/85321879-afbc-4151-8d95-bbe649ab58f1-ovs-rundir\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.742268 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85321879-afbc-4151-8d95-bbe649ab58f1-config\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.742291 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85321879-afbc-4151-8d95-bbe649ab58f1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.742377 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/85321879-afbc-4151-8d95-bbe649ab58f1-ovn-rundir\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.742411 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm88q\" (UniqueName: \"kubernetes.io/projected/85321879-afbc-4151-8d95-bbe649ab58f1-kube-api-access-tm88q\") pod 
\"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.742435 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85321879-afbc-4151-8d95-bbe649ab58f1-combined-ca-bundle\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.759441 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l6b8k"] Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843457 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/85321879-afbc-4151-8d95-bbe649ab58f1-ovn-rundir\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843516 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm88q\" (UniqueName: \"kubernetes.io/projected/85321879-afbc-4151-8d95-bbe649ab58f1-kube-api-access-tm88q\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843548 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85321879-afbc-4151-8d95-bbe649ab58f1-combined-ca-bundle\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843573 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/85321879-afbc-4151-8d95-bbe649ab58f1-ovs-rundir\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843599 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85321879-afbc-4151-8d95-bbe649ab58f1-config\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843619 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85321879-afbc-4151-8d95-bbe649ab58f1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843652 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:29 crc kubenswrapper[4880]: E1201 03:12:29.843802 4880 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 03:12:29 crc kubenswrapper[4880]: E1201 03:12:29.843816 4880 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 03:12:29 crc kubenswrapper[4880]: E1201 03:12:29.843860 4880 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift podName:121483f7-5771-4af8-9777-b980cc1bd4ad nodeName:}" failed. No retries permitted until 2025-12-01 03:12:33.843846426 +0000 UTC m=+983.355100798 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift") pod "swift-storage-0" (UID: "121483f7-5771-4af8-9777-b980cc1bd4ad") : configmap "swift-ring-files" not found Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843897 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/85321879-afbc-4151-8d95-bbe649ab58f1-ovn-rundir\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.843927 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/85321879-afbc-4151-8d95-bbe649ab58f1-ovs-rundir\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.845035 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85321879-afbc-4151-8d95-bbe649ab58f1-config\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.897128 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85321879-afbc-4151-8d95-bbe649ab58f1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " 
pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.898818 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85321879-afbc-4151-8d95-bbe649ab58f1-combined-ca-bundle\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.899425 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm88q\" (UniqueName: \"kubernetes.io/projected/85321879-afbc-4151-8d95-bbe649ab58f1-kube-api-access-tm88q\") pod \"ovn-controller-metrics-l6b8k\" (UID: \"85321879-afbc-4151-8d95-bbe649ab58f1\") " pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.983251 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.984352 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f84f895-7n558" event={"ID":"dec59321-c9ec-4666-8101-e03922db2a16","Type":"ContainerDied","Data":"b38c56adf5fec452f515e8d7eeb1830976b21543ca2319d95ba4d334a8c74d7f"} Dec 01 03:12:29 crc kubenswrapper[4880]: I1201 03:12:29.984712 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.024944 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qgczk"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.026148 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.029258 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.029487 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.030318 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.034646 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-l6b8k" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.046631 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-dns-svc\") pod \"dec59321-c9ec-4666-8101-e03922db2a16\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.046793 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lptvv\" (UniqueName: \"kubernetes.io/projected/dec59321-c9ec-4666-8101-e03922db2a16-kube-api-access-lptvv\") pod \"dec59321-c9ec-4666-8101-e03922db2a16\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.046833 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-config\") pod \"dec59321-c9ec-4666-8101-e03922db2a16\" (UID: \"dec59321-c9ec-4666-8101-e03922db2a16\") " Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.048350 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-config" (OuterVolumeSpecName: "config") pod "dec59321-c9ec-4666-8101-e03922db2a16" (UID: "dec59321-c9ec-4666-8101-e03922db2a16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.049662 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dec59321-c9ec-4666-8101-e03922db2a16" (UID: "dec59321-c9ec-4666-8101-e03922db2a16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.071978 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cf454749-vbp7t"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.075105 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec59321-c9ec-4666-8101-e03922db2a16-kube-api-access-lptvv" (OuterVolumeSpecName: "kube-api-access-lptvv") pod "dec59321-c9ec-4666-8101-e03922db2a16" (UID: "dec59321-c9ec-4666-8101-e03922db2a16"). InnerVolumeSpecName "kube-api-access-lptvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.088648 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.149302 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qgczk"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.150171 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.150188 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lptvv\" (UniqueName: \"kubernetes.io/projected/dec59321-c9ec-4666-8101-e03922db2a16-kube-api-access-lptvv\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.150198 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec59321-c9ec-4666-8101-e03922db2a16-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.168951 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685cfc6bfc-2mb9m"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.170319 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.174578 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685cfc6bfc-2mb9m"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.175595 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251000 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d3ae81e1-075b-46d6-a179-07ef619d49bd-etc-swift\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251376 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-combined-ca-bundle\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251422 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-swiftconf\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251453 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-ring-data-devices\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" 
Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251470 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgwnr\" (UniqueName: \"kubernetes.io/projected/d3ae81e1-075b-46d6-a179-07ef619d49bd-kube-api-access-cgwnr\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251499 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-dispersionconf\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.251633 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-scripts\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.342606 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356039 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-swiftconf\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356102 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-config\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356125 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-ring-data-devices\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356142 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgwnr\" (UniqueName: \"kubernetes.io/projected/d3ae81e1-075b-46d6-a179-07ef619d49bd-kube-api-access-cgwnr\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356169 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6klc\" (UniqueName: \"kubernetes.io/projected/d24dae40-463d-4451-a741-bca4504d68e8-kube-api-access-s6klc\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356192 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-dispersionconf\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356220 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-scripts\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356241 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-nb\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356263 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-sb\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356307 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d3ae81e1-075b-46d6-a179-07ef619d49bd-etc-swift\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356345 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-combined-ca-bundle\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.356362 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-dns-svc\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.357076 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-ring-data-devices\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.357232 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-scripts\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.357617 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d3ae81e1-075b-46d6-a179-07ef619d49bd-etc-swift\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.364289 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-swiftconf\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.365021 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-combined-ca-bundle\") pod \"swift-ring-rebalance-qgczk\" (UID: 
\"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.371255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-dispersionconf\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.382022 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgwnr\" (UniqueName: \"kubernetes.io/projected/d3ae81e1-075b-46d6-a179-07ef619d49bd-kube-api-access-cgwnr\") pod \"swift-ring-rebalance-qgczk\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") " pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.382610 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.392431 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.399781 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.400183 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.400406 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-gjxsz" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.400588 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.447998 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.452502 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.461042 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6klc\" (UniqueName: \"kubernetes.io/projected/d24dae40-463d-4451-a741-bca4504d68e8-kube-api-access-s6klc\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.461229 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-nb\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.461474 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-sb\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.462179 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-sb\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.463138 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-dns-svc\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.463430 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-config\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.465753 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-dns-svc\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.467674 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-config\") pod 
\"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.468447 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-nb\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.497159 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6klc\" (UniqueName: \"kubernetes.io/projected/d24dae40-463d-4451-a741-bca4504d68e8-kube-api-access-s6klc\") pod \"dnsmasq-dns-685cfc6bfc-2mb9m\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.508563 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.568863 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.569309 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.569398 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-scripts\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.569426 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.569511 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-config\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.569528 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nndjp\" (UniqueName: \"kubernetes.io/projected/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-kube-api-access-nndjp\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.569557 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.588901 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l6b8k"] Dec 01 03:12:30 crc kubenswrapper[4880]: W1201 03:12:30.614091 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85321879_afbc_4151_8d95_bbe649ab58f1.slice/crio-26bf83658279b658e8dac446dd34753e441beecb0a75a17c1c86658d4aa1afa1 WatchSource:0}: Error finding container 26bf83658279b658e8dac446dd34753e441beecb0a75a17c1c86658d4aa1afa1: Status 404 returned error can't find the container with id 26bf83658279b658e8dac446dd34753e441beecb0a75a17c1c86658d4aa1afa1 Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.675775 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.675814 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.675851 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-scripts\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.675989 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.676069 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-config\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.678298 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-config\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.679101 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nndjp\" (UniqueName: \"kubernetes.io/projected/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-kube-api-access-nndjp\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 
03:12:30.679192 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.680054 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.683385 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-scripts\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.683385 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.683795 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.686768 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.700591 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nndjp\" (UniqueName: \"kubernetes.io/projected/4549b62e-b55f-4fc3-8ab5-e405062fbbe7-kube-api-access-nndjp\") pod \"ovn-northd-0\" (UID: \"4549b62e-b55f-4fc3-8ab5-e405062fbbe7\") " pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.742054 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.976271 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qgczk"] Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.994717 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l6b8k" event={"ID":"85321879-afbc-4151-8d95-bbe649ab58f1","Type":"ContainerStarted","Data":"b47b6b63b224b0323b3edae043f744c975bbd10ec822c5a5ba38964dbc309b3f"} Dec 01 03:12:30 crc kubenswrapper[4880]: I1201 03:12:30.994749 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l6b8k" event={"ID":"85321879-afbc-4151-8d95-bbe649ab58f1","Type":"ContainerStarted","Data":"26bf83658279b658e8dac446dd34753e441beecb0a75a17c1c86658d4aa1afa1"} Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.000385 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d7b466f3-1cab-4282-963d-2cf055d1514f","Type":"ContainerStarted","Data":"f0877f6eb973ad9dbf0a4f3dba4a79a8b3f449d6a63ad7c8fcb84bb8765bd5b7"} Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.001389 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qgczk" 
event={"ID":"d3ae81e1-075b-46d6-a179-07ef619d49bd","Type":"ContainerStarted","Data":"f365314c969af0b7d1dfbac193de4003b0def66967705149c75c2c09187cb057"} Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.002953 4880 generic.go:334] "Generic (PLEG): container finished" podID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerID="32e136d9505381e386ac4cf287e0546f3a9c9e2b9ff9ac3fdb8ac47e31d948c8" exitCode=0 Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.003614 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" event={"ID":"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e","Type":"ContainerDied","Data":"32e136d9505381e386ac4cf287e0546f3a9c9e2b9ff9ac3fdb8ac47e31d948c8"} Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.003648 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" event={"ID":"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e","Type":"ContainerStarted","Data":"b2ac3cb07ba522ac2b43aacafe81f7ec00770a0d57b47743875c8c6d26cc5063"} Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.004281 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86f84f895-7n558" Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.020582 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-l6b8k" podStartSLOduration=2.020560806 podStartE2EDuration="2.020560806s" podCreationTimestamp="2025-12-01 03:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:12:31.016174476 +0000 UTC m=+980.527428858" watchObservedRunningTime="2025-12-01 03:12:31.020560806 +0000 UTC m=+980.531815178" Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.126443 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685cfc6bfc-2mb9m"] Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.177172 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86f84f895-7n558"] Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.195367 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86f84f895-7n558"] Dec 01 03:12:31 crc kubenswrapper[4880]: I1201 03:12:31.226623 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 03:12:31 crc kubenswrapper[4880]: W1201 03:12:31.236459 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4549b62e_b55f_4fc3_8ab5_e405062fbbe7.slice/crio-25f7f805eb271c74b6ca7f7070c38dcb45e1e221b3842fb195d33c21bace96d0 WatchSource:0}: Error finding container 25f7f805eb271c74b6ca7f7070c38dcb45e1e221b3842fb195d33c21bace96d0: Status 404 returned error can't find the container with id 25f7f805eb271c74b6ca7f7070c38dcb45e1e221b3842fb195d33c21bace96d0 Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.016735 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" 
event={"ID":"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e","Type":"ContainerStarted","Data":"cf6c2e7b4ec34799bd979280387a80f6c4759eca97a166b20b676816cd29684d"} Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.017048 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.023669 4880 generic.go:334] "Generic (PLEG): container finished" podID="94a77101-889e-41cf-adfd-a563ce823710" containerID="15ea21aa525739e8d8dd4643d97f9eabb30e7ae69faf27a63aec00b3927d65f2" exitCode=0 Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.023755 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"94a77101-889e-41cf-adfd-a563ce823710","Type":"ContainerDied","Data":"15ea21aa525739e8d8dd4643d97f9eabb30e7ae69faf27a63aec00b3927d65f2"} Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.025747 4880 generic.go:334] "Generic (PLEG): container finished" podID="d24dae40-463d-4451-a741-bca4504d68e8" containerID="0da4badd63cb2fb5ba60ff5d1ce772a6d082b2e56ddef4e4b96e37a940fb3386" exitCode=0 Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.025839 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" event={"ID":"d24dae40-463d-4451-a741-bca4504d68e8","Type":"ContainerDied","Data":"0da4badd63cb2fb5ba60ff5d1ce772a6d082b2e56ddef4e4b96e37a940fb3386"} Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.025900 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" event={"ID":"d24dae40-463d-4451-a741-bca4504d68e8","Type":"ContainerStarted","Data":"cb9ed498a03bd03996cb8433f573aa7b39e31b97ff88b87a38ce68f076fa10d6"} Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.027953 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"4549b62e-b55f-4fc3-8ab5-e405062fbbe7","Type":"ContainerStarted","Data":"25f7f805eb271c74b6ca7f7070c38dcb45e1e221b3842fb195d33c21bace96d0"} Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.028977 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerName="dnsmasq-dns" containerID="cri-o://861ab21707c5a233dccf644b4c19ecfd8a448e22256b77c22bc9766977eec2d9" gracePeriod=10 Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.037130 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" podStartSLOduration=3.037113604 podStartE2EDuration="3.037113604s" podCreationTimestamp="2025-12-01 03:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:12:32.033132804 +0000 UTC m=+981.544387186" watchObservedRunningTime="2025-12-01 03:12:32.037113604 +0000 UTC m=+981.548367976" Dec 01 03:12:32 crc kubenswrapper[4880]: I1201 03:12:32.796602 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec59321-c9ec-4666-8101-e03922db2a16" path="/var/lib/kubelet/pods/dec59321-c9ec-4666-8101-e03922db2a16/volumes" Dec 01 03:12:33 crc kubenswrapper[4880]: I1201 03:12:33.034430 4880 generic.go:334] "Generic (PLEG): container finished" podID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerID="861ab21707c5a233dccf644b4c19ecfd8a448e22256b77c22bc9766977eec2d9" exitCode=0 Dec 01 03:12:33 crc kubenswrapper[4880]: I1201 03:12:33.034941 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" event={"ID":"ae5197a2-6d41-441b-b781-7b93ea7831f8","Type":"ContainerDied","Data":"861ab21707c5a233dccf644b4c19ecfd8a448e22256b77c22bc9766977eec2d9"} Dec 01 03:12:33 crc kubenswrapper[4880]: I1201 03:12:33.944541 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:33 crc kubenswrapper[4880]: E1201 03:12:33.944791 4880 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 03:12:33 crc kubenswrapper[4880]: E1201 03:12:33.945036 4880 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 03:12:33 crc kubenswrapper[4880]: E1201 03:12:33.945086 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift podName:121483f7-5771-4af8-9777-b980cc1bd4ad nodeName:}" failed. No retries permitted until 2025-12-01 03:12:41.945069576 +0000 UTC m=+991.456323968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift") pod "swift-storage-0" (UID: "121483f7-5771-4af8-9777-b980cc1bd4ad") : configmap "swift-ring-files" not found Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.044059 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" event={"ID":"ae5197a2-6d41-441b-b781-7b93ea7831f8","Type":"ContainerDied","Data":"6b55699ec43ee02e244e3aba460ef89fda36097b360df67347103119fb178185"} Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.044148 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b55699ec43ee02e244e3aba460ef89fda36097b360df67347103119fb178185" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.144292 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.256028 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc4tc\" (UniqueName: \"kubernetes.io/projected/ae5197a2-6d41-441b-b781-7b93ea7831f8-kube-api-access-dc4tc\") pod \"ae5197a2-6d41-441b-b781-7b93ea7831f8\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.256361 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-config\") pod \"ae5197a2-6d41-441b-b781-7b93ea7831f8\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.256558 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-dns-svc\") pod \"ae5197a2-6d41-441b-b781-7b93ea7831f8\" (UID: \"ae5197a2-6d41-441b-b781-7b93ea7831f8\") " Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.263022 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5197a2-6d41-441b-b781-7b93ea7831f8-kube-api-access-dc4tc" (OuterVolumeSpecName: "kube-api-access-dc4tc") pod "ae5197a2-6d41-441b-b781-7b93ea7831f8" (UID: "ae5197a2-6d41-441b-b781-7b93ea7831f8"). InnerVolumeSpecName "kube-api-access-dc4tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.332488 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae5197a2-6d41-441b-b781-7b93ea7831f8" (UID: "ae5197a2-6d41-441b-b781-7b93ea7831f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.333252 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-config" (OuterVolumeSpecName: "config") pod "ae5197a2-6d41-441b-b781-7b93ea7831f8" (UID: "ae5197a2-6d41-441b-b781-7b93ea7831f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.358773 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.359033 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc4tc\" (UniqueName: \"kubernetes.io/projected/ae5197a2-6d41-441b-b781-7b93ea7831f8-kube-api-access-dc4tc\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.359139 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5197a2-6d41-441b-b781-7b93ea7831f8-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:34 crc kubenswrapper[4880]: I1201 03:12:34.831931 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.053928 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"94a77101-889e-41cf-adfd-a563ce823710","Type":"ContainerStarted","Data":"bf2bd762cfa0e8726b098a7de53563ebdc1e707be0cb8d5845c06d0fb41209c2"} Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.055359 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qgczk" 
event={"ID":"d3ae81e1-075b-46d6-a179-07ef619d49bd","Type":"ContainerStarted","Data":"ce1e83364183b05286ce5c82e50c107fe3495ec624fd7ba9509344ecb1e51a78"} Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.057887 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" event={"ID":"d24dae40-463d-4451-a741-bca4504d68e8","Type":"ContainerStarted","Data":"b2a8866b63ee4e18b22aae7dcca6794436a6500d0d12df7e8e7c3b6544cd5cd8"} Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.058009 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.059663 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4549b62e-b55f-4fc3-8ab5-e405062fbbe7","Type":"ContainerStarted","Data":"7a515af7e36b7fa5ae752081935f6c97c4588e2070cfdd604cfc58b37847ce55"} Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.059714 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4549b62e-b55f-4fc3-8ab5-e405062fbbe7","Type":"ContainerStarted","Data":"5a3ecf7bc7b7aa3ff3209ce886185ad9dd6e40a4932ae1a65918db12059a8c05"} Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.059682 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cf454749-vbp7t" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.059792 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.074817 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371990.77998 podStartE2EDuration="46.074796606s" podCreationTimestamp="2025-12-01 03:11:49 +0000 UTC" firstStartedPulling="2025-12-01 03:11:51.973056368 +0000 UTC m=+941.484310740" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:12:35.069745809 +0000 UTC m=+984.581000201" watchObservedRunningTime="2025-12-01 03:12:35.074796606 +0000 UTC m=+984.586050988" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.090825 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qgczk" podStartSLOduration=3.077663122 podStartE2EDuration="6.090808608s" podCreationTimestamp="2025-12-01 03:12:29 +0000 UTC" firstStartedPulling="2025-12-01 03:12:30.989046175 +0000 UTC m=+980.500300547" lastFinishedPulling="2025-12-01 03:12:34.002191661 +0000 UTC m=+983.513446033" observedRunningTime="2025-12-01 03:12:35.083528616 +0000 UTC m=+984.594782988" watchObservedRunningTime="2025-12-01 03:12:35.090808608 +0000 UTC m=+984.602062980" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.109641 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" podStartSLOduration=5.109621741 podStartE2EDuration="5.109621741s" podCreationTimestamp="2025-12-01 03:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:12:35.101527478 +0000 UTC m=+984.612781860" watchObservedRunningTime="2025-12-01 03:12:35.109621741 +0000 UTC 
m=+984.620876123" Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.119555 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cf454749-vbp7t"] Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.125232 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cf454749-vbp7t"] Dec 01 03:12:35 crc kubenswrapper[4880]: I1201 03:12:35.140900 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.447930873 podStartE2EDuration="5.140880756s" podCreationTimestamp="2025-12-01 03:12:30 +0000 UTC" firstStartedPulling="2025-12-01 03:12:31.240136562 +0000 UTC m=+980.751390934" lastFinishedPulling="2025-12-01 03:12:33.933086445 +0000 UTC m=+983.444340817" observedRunningTime="2025-12-01 03:12:35.134731042 +0000 UTC m=+984.645985434" watchObservedRunningTime="2025-12-01 03:12:35.140880756 +0000 UTC m=+984.652135128" Dec 01 03:12:35 crc kubenswrapper[4880]: E1201 03:12:35.627212 4880 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:51986->38.102.83.39:42095: write tcp 38.102.83.39:51986->38.102.83.39:42095: write: broken pipe Dec 01 03:12:36 crc kubenswrapper[4880]: I1201 03:12:36.803122 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" path="/var/lib/kubelet/pods/ae5197a2-6d41-441b-b781-7b93ea7831f8/volumes" Dec 01 03:12:39 crc kubenswrapper[4880]: I1201 03:12:39.704011 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" Dec 01 03:12:40 crc kubenswrapper[4880]: I1201 03:12:40.510023 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:12:40 crc kubenswrapper[4880]: I1201 03:12:40.672083 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"] Dec 01 
03:12:40 crc kubenswrapper[4880]: I1201 03:12:40.672301 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerName="dnsmasq-dns" containerID="cri-o://cf6c2e7b4ec34799bd979280387a80f6c4759eca97a166b20b676816cd29684d" gracePeriod=10 Dec 01 03:12:41 crc kubenswrapper[4880]: I1201 03:12:41.115974 4880 generic.go:334] "Generic (PLEG): container finished" podID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerID="cf6c2e7b4ec34799bd979280387a80f6c4759eca97a166b20b676816cd29684d" exitCode=0 Dec 01 03:12:41 crc kubenswrapper[4880]: I1201 03:12:41.116015 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" event={"ID":"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e","Type":"ContainerDied","Data":"cf6c2e7b4ec34799bd979280387a80f6c4759eca97a166b20b676816cd29684d"} Dec 01 03:12:41 crc kubenswrapper[4880]: I1201 03:12:41.272213 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 01 03:12:41 crc kubenswrapper[4880]: I1201 03:12:41.272252 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 01 03:12:41 crc kubenswrapper[4880]: I1201 03:12:41.383854 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 01 03:12:41 crc kubenswrapper[4880]: I1201 03:12:41.991120 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0" Dec 01 03:12:41 crc kubenswrapper[4880]: E1201 03:12:41.991321 4880 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 03:12:41 crc 
kubenswrapper[4880]: E1201 03:12:41.991623 4880 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 01 03:12:41 crc kubenswrapper[4880]: E1201 03:12:41.991695 4880 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift podName:121483f7-5771-4af8-9777-b980cc1bd4ad nodeName:}" failed. No retries permitted until 2025-12-01 03:12:57.9916638 +0000 UTC m=+1007.502918172 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift") pod "swift-storage-0" (UID: "121483f7-5771-4af8-9777-b980cc1bd4ad") : configmap "swift-ring-files" not found
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.275352 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.316214 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.397132 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-dns-svc\") pod \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") "
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.397362 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-ovsdbserver-nb\") pod \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") "
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.397402 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-config\") pod \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") "
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.397420 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7n96\" (UniqueName: \"kubernetes.io/projected/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-kube-api-access-z7n96\") pod \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\" (UID: \"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e\") "
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.436062 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-kube-api-access-z7n96" (OuterVolumeSpecName: "kube-api-access-z7n96") pod "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" (UID: "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e"). InnerVolumeSpecName "kube-api-access-z7n96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.445476 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-config" (OuterVolumeSpecName: "config") pod "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" (UID: "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.472734 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" (UID: "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.477246 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" (UID: "fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.499360 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.499390 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-config\") on node \"crc\" DevicePath \"\""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.499402 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7n96\" (UniqueName: \"kubernetes.io/projected/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-kube-api-access-z7n96\") on node \"crc\" DevicePath \"\""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.499412 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.605736 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-sk25r"]
Dec 01 03:12:42 crc kubenswrapper[4880]: E1201 03:12:42.606285 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerName="init"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.606302 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerName="init"
Dec 01 03:12:42 crc kubenswrapper[4880]: E1201 03:12:42.606324 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerName="init"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.606330 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerName="init"
Dec 01 03:12:42 crc kubenswrapper[4880]: E1201 03:12:42.606337 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerName="dnsmasq-dns"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.606343 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerName="dnsmasq-dns"
Dec 01 03:12:42 crc kubenswrapper[4880]: E1201 03:12:42.606359 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerName="dnsmasq-dns"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.606364 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerName="dnsmasq-dns"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.606521 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5197a2-6d41-441b-b781-7b93ea7831f8" containerName="dnsmasq-dns"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.606544 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" containerName="dnsmasq-dns"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.607027 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.614900 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-sk25r"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.653399 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a5dd-account-create-update-zzwzw"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.654442 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.658967 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.670972 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a5dd-account-create-update-zzwzw"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.701471 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvq7\" (UniqueName: \"kubernetes.io/projected/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-kube-api-access-8fvq7\") pod \"keystone-a5dd-account-create-update-zzwzw\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.701570 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f346c3-d584-4c71-8ee4-605e13ab1333-operator-scripts\") pod \"keystone-db-create-sk25r\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.701602 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twhf\" (UniqueName: \"kubernetes.io/projected/15f346c3-d584-4c71-8ee4-605e13ab1333-kube-api-access-5twhf\") pod \"keystone-db-create-sk25r\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.701636 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-operator-scripts\") pod \"keystone-a5dd-account-create-update-zzwzw\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.802764 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f346c3-d584-4c71-8ee4-605e13ab1333-operator-scripts\") pod \"keystone-db-create-sk25r\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.802808 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twhf\" (UniqueName: \"kubernetes.io/projected/15f346c3-d584-4c71-8ee4-605e13ab1333-kube-api-access-5twhf\") pod \"keystone-db-create-sk25r\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.802842 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-operator-scripts\") pod \"keystone-a5dd-account-create-update-zzwzw\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.802918 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvq7\" (UniqueName: \"kubernetes.io/projected/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-kube-api-access-8fvq7\") pod \"keystone-a5dd-account-create-update-zzwzw\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.803781 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f346c3-d584-4c71-8ee4-605e13ab1333-operator-scripts\") pod \"keystone-db-create-sk25r\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.803876 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-operator-scripts\") pod \"keystone-a5dd-account-create-update-zzwzw\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.821507 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvq7\" (UniqueName: \"kubernetes.io/projected/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-kube-api-access-8fvq7\") pod \"keystone-a5dd-account-create-update-zzwzw\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.827488 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twhf\" (UniqueName: \"kubernetes.io/projected/15f346c3-d584-4c71-8ee4-605e13ab1333-kube-api-access-5twhf\") pod \"keystone-db-create-sk25r\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.887817 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ps5qj"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.901092 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.919542 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ps5qj"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.925358 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-sk25r"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.936948 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-957c-account-create-update-l5xmj"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.937863 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.944977 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.963682 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-957c-account-create-update-l5xmj"]
Dec 01 03:12:42 crc kubenswrapper[4880]: I1201 03:12:42.968406 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a5dd-account-create-update-zzwzw"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.007749 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8526178-42ba-4e79-bbe7-e52de3593c59-operator-scripts\") pod \"placement-db-create-ps5qj\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.007810 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7qs\" (UniqueName: \"kubernetes.io/projected/d8526178-42ba-4e79-bbe7-e52de3593c59-kube-api-access-zd7qs\") pod \"placement-db-create-ps5qj\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.007838 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5hz\" (UniqueName: \"kubernetes.io/projected/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-kube-api-access-wx5hz\") pod \"placement-957c-account-create-update-l5xmj\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.007905 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-operator-scripts\") pod \"placement-957c-account-create-update-l5xmj\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.109500 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8526178-42ba-4e79-bbe7-e52de3593c59-operator-scripts\") pod \"placement-db-create-ps5qj\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.109539 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd7qs\" (UniqueName: \"kubernetes.io/projected/d8526178-42ba-4e79-bbe7-e52de3593c59-kube-api-access-zd7qs\") pod \"placement-db-create-ps5qj\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.109565 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5hz\" (UniqueName: \"kubernetes.io/projected/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-kube-api-access-wx5hz\") pod \"placement-957c-account-create-update-l5xmj\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.109619 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-operator-scripts\") pod \"placement-957c-account-create-update-l5xmj\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.110578 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-operator-scripts\") pod \"placement-957c-account-create-update-l5xmj\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.111041 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8526178-42ba-4e79-bbe7-e52de3593c59-operator-scripts\") pod \"placement-db-create-ps5qj\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.131475 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5hz\" (UniqueName: \"kubernetes.io/projected/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-kube-api-access-wx5hz\") pod \"placement-957c-account-create-update-l5xmj\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.131690 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd7qs\" (UniqueName: \"kubernetes.io/projected/d8526178-42ba-4e79-bbe7-e52de3593c59-kube-api-access-zd7qs\") pod \"placement-db-create-ps5qj\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.161751 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn" event={"ID":"fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e","Type":"ContainerDied","Data":"b2ac3cb07ba522ac2b43aacafe81f7ec00770a0d57b47743875c8c6d26cc5063"}
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.161799 4880 scope.go:117] "RemoveContainer" containerID="cf6c2e7b4ec34799bd979280387a80f6c4759eca97a166b20b676816cd29684d"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.161918 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.166044 4880 generic.go:334] "Generic (PLEG): container finished" podID="d3ae81e1-075b-46d6-a179-07ef619d49bd" containerID="ce1e83364183b05286ce5c82e50c107fe3495ec624fd7ba9509344ecb1e51a78" exitCode=0
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.166611 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qgczk" event={"ID":"d3ae81e1-075b-46d6-a179-07ef619d49bd","Type":"ContainerDied","Data":"ce1e83364183b05286ce5c82e50c107fe3495ec624fd7ba9509344ecb1e51a78"}
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.185689 4880 scope.go:117] "RemoveContainer" containerID="32e136d9505381e386ac4cf287e0546f3a9c9e2b9ff9ac3fdb8ac47e31d948c8"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.212992 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.227202 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ps5qj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.230589 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59f9c6bf9c-8jkhn"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.244683 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4mtxq"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.245691 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.252842 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4mtxq"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.270085 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-957c-account-create-update-l5xmj"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.317947 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb221610-a34c-4fd4-9789-9cf67fb330c7-operator-scripts\") pod \"glance-db-create-4mtxq\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.318068 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jw5c\" (UniqueName: \"kubernetes.io/projected/eb221610-a34c-4fd4-9789-9cf67fb330c7-kube-api-access-2jw5c\") pod \"glance-db-create-4mtxq\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.353452 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2a47-account-create-update-6l74s"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.354707 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.357174 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.378789 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2a47-account-create-update-6l74s"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.421230 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jw5c\" (UniqueName: \"kubernetes.io/projected/eb221610-a34c-4fd4-9789-9cf67fb330c7-kube-api-access-2jw5c\") pod \"glance-db-create-4mtxq\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.421316 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb221610-a34c-4fd4-9789-9cf67fb330c7-operator-scripts\") pod \"glance-db-create-4mtxq\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.421342 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dda7a8-466c-4937-a3ba-dda232572d97-operator-scripts\") pod \"glance-2a47-account-create-update-6l74s\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.421377 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2tv\" (UniqueName: \"kubernetes.io/projected/56dda7a8-466c-4937-a3ba-dda232572d97-kube-api-access-6d2tv\") pod \"glance-2a47-account-create-update-6l74s\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.422386 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb221610-a34c-4fd4-9789-9cf67fb330c7-operator-scripts\") pod \"glance-db-create-4mtxq\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.443461 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-sk25r"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.462547 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jw5c\" (UniqueName: \"kubernetes.io/projected/eb221610-a34c-4fd4-9789-9cf67fb330c7-kube-api-access-2jw5c\") pod \"glance-db-create-4mtxq\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.517427 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a5dd-account-create-update-zzwzw"]
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.525142 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dda7a8-466c-4937-a3ba-dda232572d97-operator-scripts\") pod \"glance-2a47-account-create-update-6l74s\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.525201 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d2tv\" (UniqueName: \"kubernetes.io/projected/56dda7a8-466c-4937-a3ba-dda232572d97-kube-api-access-6d2tv\") pod \"glance-2a47-account-create-update-6l74s\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.526135 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dda7a8-466c-4937-a3ba-dda232572d97-operator-scripts\") pod \"glance-2a47-account-create-update-6l74s\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.543248 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d2tv\" (UniqueName: \"kubernetes.io/projected/56dda7a8-466c-4937-a3ba-dda232572d97-kube-api-access-6d2tv\") pod \"glance-2a47-account-create-update-6l74s\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.681272 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4mtxq"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.697097 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2a47-account-create-update-6l74s"
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.793469 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-957c-account-create-update-l5xmj"]
Dec 01 03:12:43 crc kubenswrapper[4880]: W1201 03:12:43.816247 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4bd4dff_a4e3_4e6f_98f3_4f8a00463f5a.slice/crio-a07aee52b48f63e9c4d4d491819d64dd143d148f172fdbe84398e78bc8698f49 WatchSource:0}: Error finding container a07aee52b48f63e9c4d4d491819d64dd143d148f172fdbe84398e78bc8698f49: Status 404 returned error can't find the container with id a07aee52b48f63e9c4d4d491819d64dd143d148f172fdbe84398e78bc8698f49
Dec 01 03:12:43 crc kubenswrapper[4880]: I1201 03:12:43.867926 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ps5qj"]
Dec 01 03:12:43 crc kubenswrapper[4880]: W1201 03:12:43.876775 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8526178_42ba_4e79_bbe7_e52de3593c59.slice/crio-4543c1a242036acf4c1670d2df8609b0dbcb6fe898716487a083cdf747a73d2e WatchSource:0}: Error finding container 4543c1a242036acf4c1670d2df8609b0dbcb6fe898716487a083cdf747a73d2e: Status 404 returned error can't find the container with id 4543c1a242036acf4c1670d2df8609b0dbcb6fe898716487a083cdf747a73d2e
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.187164 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4mtxq"]
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.188183 4880 generic.go:334] "Generic (PLEG): container finished" podID="15f346c3-d584-4c71-8ee4-605e13ab1333" containerID="a55ded1176d47ac89be030b8ce31e8fbc696b484cef607baa1cdd27860f7a00a" exitCode=0
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.188267 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sk25r" event={"ID":"15f346c3-d584-4c71-8ee4-605e13ab1333","Type":"ContainerDied","Data":"a55ded1176d47ac89be030b8ce31e8fbc696b484cef607baa1cdd27860f7a00a"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.188294 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sk25r" event={"ID":"15f346c3-d584-4c71-8ee4-605e13ab1333","Type":"ContainerStarted","Data":"26e3528cb18cec461342d12e82efc3556f4b6198c715c6a0a97f0d794b370a4c"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.195418 4880 generic.go:334] "Generic (PLEG): container finished" podID="06daccda-d4f8-43c3-8a2d-b2ebad9e89be" containerID="c64e32e64219008c9f8d250cf58b5f9e40a8b18b4c42fd5ac4b716867172e387" exitCode=0
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.195476 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a5dd-account-create-update-zzwzw" event={"ID":"06daccda-d4f8-43c3-8a2d-b2ebad9e89be","Type":"ContainerDied","Data":"c64e32e64219008c9f8d250cf58b5f9e40a8b18b4c42fd5ac4b716867172e387"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.195498 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a5dd-account-create-update-zzwzw" event={"ID":"06daccda-d4f8-43c3-8a2d-b2ebad9e89be","Type":"ContainerStarted","Data":"85ce846b773837ed463828b7b56a934d6fd1227f7b8f481f7e687dd40dbcc29e"}
Dec 01 03:12:44 crc kubenswrapper[4880]: W1201 03:12:44.197002 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb221610_a34c_4fd4_9789_9cf67fb330c7.slice/crio-7216a5e25ca106a89e47a834528fb6082499d7ded7fe9736d9dc2d94b1da410d WatchSource:0}: Error finding container 7216a5e25ca106a89e47a834528fb6082499d7ded7fe9736d9dc2d94b1da410d: Status 404 returned error can't find the container with id 7216a5e25ca106a89e47a834528fb6082499d7ded7fe9736d9dc2d94b1da410d
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.198120 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ps5qj" event={"ID":"d8526178-42ba-4e79-bbe7-e52de3593c59","Type":"ContainerStarted","Data":"6204f7b967c846e7a9a684207a57b4825b0ed6db6517ef47744618ea8b0c9fd7"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.198143 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ps5qj" event={"ID":"d8526178-42ba-4e79-bbe7-e52de3593c59","Type":"ContainerStarted","Data":"4543c1a242036acf4c1670d2df8609b0dbcb6fe898716487a083cdf747a73d2e"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.207641 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-957c-account-create-update-l5xmj" event={"ID":"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a","Type":"ContainerStarted","Data":"1f0a64783561e25499bb51dce0dcddb66ba691e8f7c8460ce8d7520bfd02659e"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.207677 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-957c-account-create-update-l5xmj" event={"ID":"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a","Type":"ContainerStarted","Data":"a07aee52b48f63e9c4d4d491819d64dd143d148f172fdbe84398e78bc8698f49"}
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.247229 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-ps5qj" podStartSLOduration=2.247212815 podStartE2EDuration="2.247212815s" podCreationTimestamp="2025-12-01 03:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:12:44.237374538 +0000 UTC m=+993.748628910" watchObservedRunningTime="2025-12-01 03:12:44.247212815 +0000 UTC m=+993.758467187"
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.273133 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-957c-account-create-update-l5xmj" podStartSLOduration=2.273116415 podStartE2EDuration="2.273116415s" podCreationTimestamp="2025-12-01 03:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:12:44.272234563 +0000 UTC m=+993.783488935" watchObservedRunningTime="2025-12-01 03:12:44.273116415 +0000 UTC m=+993.784370777"
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.302050 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2a47-account-create-update-6l74s"]
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.585849 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qgczk"
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642358 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-scripts\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642428 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-ring-data-devices\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642452 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-dispersionconf\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642524 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d3ae81e1-075b-46d6-a179-07ef619d49bd-etc-swift\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642593 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-combined-ca-bundle\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642636 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgwnr\" (UniqueName: \"kubernetes.io/projected/d3ae81e1-075b-46d6-a179-07ef619d49bd-kube-api-access-cgwnr\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.642694 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-swiftconf\") pod \"d3ae81e1-075b-46d6-a179-07ef619d49bd\" (UID: \"d3ae81e1-075b-46d6-a179-07ef619d49bd\") "
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.643032 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.643649 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ae81e1-075b-46d6-a179-07ef619d49bd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.652404 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.653272 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ae81e1-075b-46d6-a179-07ef619d49bd-kube-api-access-cgwnr" (OuterVolumeSpecName: "kube-api-access-cgwnr") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "kube-api-access-cgwnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.663512 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-scripts" (OuterVolumeSpecName: "scripts") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.665196 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.668597 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ae81e1-075b-46d6-a179-07ef619d49bd" (UID: "d3ae81e1-075b-46d6-a179-07ef619d49bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744640 4880 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d3ae81e1-075b-46d6-a179-07ef619d49bd-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744669 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744680 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgwnr\" (UniqueName: \"kubernetes.io/projected/d3ae81e1-075b-46d6-a179-07ef619d49bd-kube-api-access-cgwnr\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744689 4880 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 01 
03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744703 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744711 4880 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d3ae81e1-075b-46d6-a179-07ef619d49bd-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.744719 4880 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d3ae81e1-075b-46d6-a179-07ef619d49bd-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:44 crc kubenswrapper[4880]: I1201 03:12:44.792925 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e" path="/var/lib/kubelet/pods/fbcf088b-4b4e-4a10-9ee9-c41b4b9aec9e/volumes" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.217836 4880 generic.go:334] "Generic (PLEG): container finished" podID="d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" containerID="1f0a64783561e25499bb51dce0dcddb66ba691e8f7c8460ce8d7520bfd02659e" exitCode=0 Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.218664 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-957c-account-create-update-l5xmj" event={"ID":"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a","Type":"ContainerDied","Data":"1f0a64783561e25499bb51dce0dcddb66ba691e8f7c8460ce8d7520bfd02659e"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.228801 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qgczk" event={"ID":"d3ae81e1-075b-46d6-a179-07ef619d49bd","Type":"ContainerDied","Data":"f365314c969af0b7d1dfbac193de4003b0def66967705149c75c2c09187cb057"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.228845 4880 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f365314c969af0b7d1dfbac193de4003b0def66967705149c75c2c09187cb057" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.228948 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qgczk" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.237155 4880 generic.go:334] "Generic (PLEG): container finished" podID="eb221610-a34c-4fd4-9789-9cf67fb330c7" containerID="362af1ed8776c66f59098e97377a2e613a54789eb0e6eebaa83f0ee7ccb65306" exitCode=0 Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.237238 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4mtxq" event={"ID":"eb221610-a34c-4fd4-9789-9cf67fb330c7","Type":"ContainerDied","Data":"362af1ed8776c66f59098e97377a2e613a54789eb0e6eebaa83f0ee7ccb65306"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.237267 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4mtxq" event={"ID":"eb221610-a34c-4fd4-9789-9cf67fb330c7","Type":"ContainerStarted","Data":"7216a5e25ca106a89e47a834528fb6082499d7ded7fe9736d9dc2d94b1da410d"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.240295 4880 generic.go:334] "Generic (PLEG): container finished" podID="56dda7a8-466c-4937-a3ba-dda232572d97" containerID="68214ba28a236cfcda1b4faa9f55a39a79dfa543b3fc94dad24e802ab74d8ed6" exitCode=0 Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.240374 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2a47-account-create-update-6l74s" event={"ID":"56dda7a8-466c-4937-a3ba-dda232572d97","Type":"ContainerDied","Data":"68214ba28a236cfcda1b4faa9f55a39a79dfa543b3fc94dad24e802ab74d8ed6"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.240406 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2a47-account-create-update-6l74s" 
event={"ID":"56dda7a8-466c-4937-a3ba-dda232572d97","Type":"ContainerStarted","Data":"19c07805044a888b0f6810c30ebb353b437917de63c777ab826e3bbdd9155660"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.254395 4880 generic.go:334] "Generic (PLEG): container finished" podID="d8526178-42ba-4e79-bbe7-e52de3593c59" containerID="6204f7b967c846e7a9a684207a57b4825b0ed6db6517ef47744618ea8b0c9fd7" exitCode=0 Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.254624 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ps5qj" event={"ID":"d8526178-42ba-4e79-bbe7-e52de3593c59","Type":"ContainerDied","Data":"6204f7b967c846e7a9a684207a57b4825b0ed6db6517ef47744618ea8b0c9fd7"} Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.659352 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-sk25r" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.680143 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5twhf\" (UniqueName: \"kubernetes.io/projected/15f346c3-d584-4c71-8ee4-605e13ab1333-kube-api-access-5twhf\") pod \"15f346c3-d584-4c71-8ee4-605e13ab1333\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.680467 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f346c3-d584-4c71-8ee4-605e13ab1333-operator-scripts\") pod \"15f346c3-d584-4c71-8ee4-605e13ab1333\" (UID: \"15f346c3-d584-4c71-8ee4-605e13ab1333\") " Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.682085 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f346c3-d584-4c71-8ee4-605e13ab1333-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15f346c3-d584-4c71-8ee4-605e13ab1333" (UID: "15f346c3-d584-4c71-8ee4-605e13ab1333"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.697541 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f346c3-d584-4c71-8ee4-605e13ab1333-kube-api-access-5twhf" (OuterVolumeSpecName: "kube-api-access-5twhf") pod "15f346c3-d584-4c71-8ee4-605e13ab1333" (UID: "15f346c3-d584-4c71-8ee4-605e13ab1333"). InnerVolumeSpecName "kube-api-access-5twhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.768820 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a5dd-account-create-update-zzwzw" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.782798 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvq7\" (UniqueName: \"kubernetes.io/projected/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-kube-api-access-8fvq7\") pod \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.783084 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-operator-scripts\") pod \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\" (UID: \"06daccda-d4f8-43c3-8a2d-b2ebad9e89be\") " Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.783640 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06daccda-d4f8-43c3-8a2d-b2ebad9e89be" (UID: "06daccda-d4f8-43c3-8a2d-b2ebad9e89be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.785521 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5twhf\" (UniqueName: \"kubernetes.io/projected/15f346c3-d584-4c71-8ee4-605e13ab1333-kube-api-access-5twhf\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.785630 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f346c3-d584-4c71-8ee4-605e13ab1333-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.787901 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-kube-api-access-8fvq7" (OuterVolumeSpecName: "kube-api-access-8fvq7") pod "06daccda-d4f8-43c3-8a2d-b2ebad9e89be" (UID: "06daccda-d4f8-43c3-8a2d-b2ebad9e89be"). InnerVolumeSpecName "kube-api-access-8fvq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.813700 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.887471 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fvq7\" (UniqueName: \"kubernetes.io/projected/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-kube-api-access-8fvq7\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:45 crc kubenswrapper[4880]: I1201 03:12:45.887505 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06daccda-d4f8-43c3-8a2d-b2ebad9e89be-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.261669 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-sk25r" event={"ID":"15f346c3-d584-4c71-8ee4-605e13ab1333","Type":"ContainerDied","Data":"26e3528cb18cec461342d12e82efc3556f4b6198c715c6a0a97f0d794b370a4c"} Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.261688 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-sk25r" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.261709 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e3528cb18cec461342d12e82efc3556f4b6198c715c6a0a97f0d794b370a4c" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.263132 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a5dd-account-create-update-zzwzw" event={"ID":"06daccda-d4f8-43c3-8a2d-b2ebad9e89be","Type":"ContainerDied","Data":"85ce846b773837ed463828b7b56a934d6fd1227f7b8f481f7e687dd40dbcc29e"} Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.263155 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ce846b773837ed463828b7b56a934d6fd1227f7b8f481f7e687dd40dbcc29e" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.263194 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a5dd-account-create-update-zzwzw" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.589055 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4mtxq" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.602482 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jw5c\" (UniqueName: \"kubernetes.io/projected/eb221610-a34c-4fd4-9789-9cf67fb330c7-kube-api-access-2jw5c\") pod \"eb221610-a34c-4fd4-9789-9cf67fb330c7\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.602602 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb221610-a34c-4fd4-9789-9cf67fb330c7-operator-scripts\") pod \"eb221610-a34c-4fd4-9789-9cf67fb330c7\" (UID: \"eb221610-a34c-4fd4-9789-9cf67fb330c7\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.603194 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb221610-a34c-4fd4-9789-9cf67fb330c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb221610-a34c-4fd4-9789-9cf67fb330c7" (UID: "eb221610-a34c-4fd4-9789-9cf67fb330c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.604008 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb221610-a34c-4fd4-9789-9cf67fb330c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.621114 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb221610-a34c-4fd4-9789-9cf67fb330c7-kube-api-access-2jw5c" (OuterVolumeSpecName: "kube-api-access-2jw5c") pod "eb221610-a34c-4fd4-9789-9cf67fb330c7" (UID: "eb221610-a34c-4fd4-9789-9cf67fb330c7"). InnerVolumeSpecName "kube-api-access-2jw5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.709296 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jw5c\" (UniqueName: \"kubernetes.io/projected/eb221610-a34c-4fd4-9789-9cf67fb330c7-kube-api-access-2jw5c\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.773911 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ps5qj" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.775856 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-957c-account-create-update-l5xmj" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.810785 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5hz\" (UniqueName: \"kubernetes.io/projected/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-kube-api-access-wx5hz\") pod \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.810825 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8526178-42ba-4e79-bbe7-e52de3593c59-operator-scripts\") pod \"d8526178-42ba-4e79-bbe7-e52de3593c59\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.810913 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7qs\" (UniqueName: \"kubernetes.io/projected/d8526178-42ba-4e79-bbe7-e52de3593c59-kube-api-access-zd7qs\") pod \"d8526178-42ba-4e79-bbe7-e52de3593c59\" (UID: \"d8526178-42ba-4e79-bbe7-e52de3593c59\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.811057 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-operator-scripts\") pod \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\" (UID: \"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.811309 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8526178-42ba-4e79-bbe7-e52de3593c59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8526178-42ba-4e79-bbe7-e52de3593c59" (UID: "d8526178-42ba-4e79-bbe7-e52de3593c59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.811517 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8526178-42ba-4e79-bbe7-e52de3593c59-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.811546 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" (UID: "d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.815293 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8526178-42ba-4e79-bbe7-e52de3593c59-kube-api-access-zd7qs" (OuterVolumeSpecName: "kube-api-access-zd7qs") pod "d8526178-42ba-4e79-bbe7-e52de3593c59" (UID: "d8526178-42ba-4e79-bbe7-e52de3593c59"). InnerVolumeSpecName "kube-api-access-zd7qs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.828312 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-kube-api-access-wx5hz" (OuterVolumeSpecName: "kube-api-access-wx5hz") pod "d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" (UID: "d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a"). InnerVolumeSpecName "kube-api-access-wx5hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.834647 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2a47-account-create-update-6l74s" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.921481 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d2tv\" (UniqueName: \"kubernetes.io/projected/56dda7a8-466c-4937-a3ba-dda232572d97-kube-api-access-6d2tv\") pod \"56dda7a8-466c-4937-a3ba-dda232572d97\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.921741 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dda7a8-466c-4937-a3ba-dda232572d97-operator-scripts\") pod \"56dda7a8-466c-4937-a3ba-dda232572d97\" (UID: \"56dda7a8-466c-4937-a3ba-dda232572d97\") " Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.922053 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.922064 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5hz\" (UniqueName: \"kubernetes.io/projected/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a-kube-api-access-wx5hz\") on node \"crc\" DevicePath \"\"" 
Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.922075 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd7qs\" (UniqueName: \"kubernetes.io/projected/d8526178-42ba-4e79-bbe7-e52de3593c59-kube-api-access-zd7qs\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.924308 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dda7a8-466c-4937-a3ba-dda232572d97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56dda7a8-466c-4937-a3ba-dda232572d97" (UID: "56dda7a8-466c-4937-a3ba-dda232572d97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:12:46 crc kubenswrapper[4880]: I1201 03:12:46.927501 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dda7a8-466c-4937-a3ba-dda232572d97-kube-api-access-6d2tv" (OuterVolumeSpecName: "kube-api-access-6d2tv") pod "56dda7a8-466c-4937-a3ba-dda232572d97" (UID: "56dda7a8-466c-4937-a3ba-dda232572d97"). InnerVolumeSpecName "kube-api-access-6d2tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.023219 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dda7a8-466c-4937-a3ba-dda232572d97-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.023246 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d2tv\" (UniqueName: \"kubernetes.io/projected/56dda7a8-466c-4937-a3ba-dda232572d97-kube-api-access-6d2tv\") on node \"crc\" DevicePath \"\"" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.273837 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-957c-account-create-update-l5xmj" event={"ID":"d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a","Type":"ContainerDied","Data":"a07aee52b48f63e9c4d4d491819d64dd143d148f172fdbe84398e78bc8698f49"} Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.273901 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07aee52b48f63e9c4d4d491819d64dd143d148f172fdbe84398e78bc8698f49" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.273963 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-957c-account-create-update-l5xmj" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.287336 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4mtxq" event={"ID":"eb221610-a34c-4fd4-9789-9cf67fb330c7","Type":"ContainerDied","Data":"7216a5e25ca106a89e47a834528fb6082499d7ded7fe9736d9dc2d94b1da410d"} Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.287373 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7216a5e25ca106a89e47a834528fb6082499d7ded7fe9736d9dc2d94b1da410d" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.287431 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4mtxq" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.289756 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2a47-account-create-update-6l74s" event={"ID":"56dda7a8-466c-4937-a3ba-dda232572d97","Type":"ContainerDied","Data":"19c07805044a888b0f6810c30ebb353b437917de63c777ab826e3bbdd9155660"} Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.289779 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c07805044a888b0f6810c30ebb353b437917de63c777ab826e3bbdd9155660" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.289810 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2a47-account-create-update-6l74s" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.294374 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ps5qj" event={"ID":"d8526178-42ba-4e79-bbe7-e52de3593c59","Type":"ContainerDied","Data":"4543c1a242036acf4c1670d2df8609b0dbcb6fe898716487a083cdf747a73d2e"} Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.294420 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4543c1a242036acf4c1670d2df8609b0dbcb6fe898716487a083cdf747a73d2e" Dec 01 03:12:47 crc kubenswrapper[4880]: I1201 03:12:47.294486 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ps5qj" Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.532164 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mgfdp"] Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.533707 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb221610-a34c-4fd4-9789-9cf67fb330c7" containerName="mariadb-database-create" Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.533774 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb221610-a34c-4fd4-9789-9cf67fb330c7" containerName="mariadb-database-create" Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.533835 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f346c3-d584-4c71-8ee4-605e13ab1333" containerName="mariadb-database-create" Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.533904 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f346c3-d584-4c71-8ee4-605e13ab1333" containerName="mariadb-database-create" Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.533988 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8526178-42ba-4e79-bbe7-e52de3593c59" containerName="mariadb-database-create" Dec 01 
03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534063 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8526178-42ba-4e79-bbe7-e52de3593c59" containerName="mariadb-database-create"
Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.534127 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06daccda-d4f8-43c3-8a2d-b2ebad9e89be" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534176 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="06daccda-d4f8-43c3-8a2d-b2ebad9e89be" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.534241 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534302 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.534361 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dda7a8-466c-4937-a3ba-dda232572d97" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534416 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dda7a8-466c-4937-a3ba-dda232572d97" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: E1201 03:12:48.534473 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ae81e1-075b-46d6-a179-07ef619d49bd" containerName="swift-ring-rebalance"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534521 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ae81e1-075b-46d6-a179-07ef619d49bd" containerName="swift-ring-rebalance"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534707 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ae81e1-075b-46d6-a179-07ef619d49bd" containerName="swift-ring-rebalance"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534767 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dda7a8-466c-4937-a3ba-dda232572d97" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534835 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f346c3-d584-4c71-8ee4-605e13ab1333" containerName="mariadb-database-create"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.534929 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="06daccda-d4f8-43c3-8a2d-b2ebad9e89be" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.535006 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8526178-42ba-4e79-bbe7-e52de3593c59" containerName="mariadb-database-create"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.535061 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb221610-a34c-4fd4-9789-9cf67fb330c7" containerName="mariadb-database-create"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.535109 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" containerName="mariadb-account-create-update"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.535626 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.537925 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kg2c4"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.540388 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.617789 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mgfdp"]
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.649690 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29b4l\" (UniqueName: \"kubernetes.io/projected/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-kube-api-access-29b4l\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.649776 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-db-sync-config-data\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.649986 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-combined-ca-bundle\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.650057 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-config-data\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.751422 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-db-sync-config-data\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.751533 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-combined-ca-bundle\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.751563 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-config-data\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.751590 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29b4l\" (UniqueName: \"kubernetes.io/projected/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-kube-api-access-29b4l\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.757238 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-combined-ca-bundle\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.757448 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-db-sync-config-data\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.759579 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-config-data\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.777340 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29b4l\" (UniqueName: \"kubernetes.io/projected/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-kube-api-access-29b4l\") pod \"glance-db-sync-mgfdp\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:48 crc kubenswrapper[4880]: I1201 03:12:48.851409 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mgfdp"
Dec 01 03:12:49 crc kubenswrapper[4880]: I1201 03:12:49.483616 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mgfdp"]
Dec 01 03:12:50 crc kubenswrapper[4880]: I1201 03:12:50.318363 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mgfdp" event={"ID":"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77","Type":"ContainerStarted","Data":"1ec4b2197a661860ccd953aeba400f80d46acc8acf0e77b989d350da33fc00ba"}
Dec 01 03:12:54 crc kubenswrapper[4880]: I1201 03:12:54.652515 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-m4jtm" podUID="f330d83a-b34f-491b-ad56-07e6bb519191" containerName="ovn-controller" probeResult="failure" output=<
Dec 01 03:12:54 crc kubenswrapper[4880]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 01 03:12:54 crc kubenswrapper[4880]: >
Dec 01 03:12:54 crc kubenswrapper[4880]: I1201 03:12:54.676471 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8c4h4"
Dec 01 03:12:58 crc kubenswrapper[4880]: I1201 03:12:58.022222 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0"
Dec 01 03:12:58 crc kubenswrapper[4880]: I1201 03:12:58.028967 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/121483f7-5771-4af8-9777-b980cc1bd4ad-etc-swift\") pod \"swift-storage-0\" (UID: \"121483f7-5771-4af8-9777-b980cc1bd4ad\") " pod="openstack/swift-storage-0"
Dec 01 03:12:58 crc kubenswrapper[4880]: I1201 03:12:58.181372 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 01 03:12:59 crc kubenswrapper[4880]: I1201 03:12:59.645797 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-m4jtm" podUID="f330d83a-b34f-491b-ad56-07e6bb519191" containerName="ovn-controller" probeResult="failure" output=<
Dec 01 03:12:59 crc kubenswrapper[4880]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 01 03:12:59 crc kubenswrapper[4880]: >
Dec 01 03:12:59 crc kubenswrapper[4880]: I1201 03:12:59.667024 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-8c4h4"
Dec 01 03:12:59 crc kubenswrapper[4880]: I1201 03:12:59.865938 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m4jtm-config-vpzfx"]
Dec 01 03:12:59 crc kubenswrapper[4880]: I1201 03:12:59.892491 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4jtm-config-vpzfx"]
Dec 01 03:12:59 crc kubenswrapper[4880]: I1201 03:12:59.892610 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:12:59 crc kubenswrapper[4880]: I1201 03:12:59.900185 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.055834 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run-ovn\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.055944 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-log-ovn\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.056122 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwc9g\" (UniqueName: \"kubernetes.io/projected/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-kube-api-access-qwc9g\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.056190 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.056207 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-scripts\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.056263 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-additional-scripts\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.157566 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwc9g\" (UniqueName: \"kubernetes.io/projected/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-kube-api-access-qwc9g\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.157619 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.157634 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-scripts\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.157660 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-additional-scripts\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.157694 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run-ovn\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.157731 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-log-ovn\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.158078 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-log-ovn\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.158891 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-additional-scripts\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.158970 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run-ovn\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.159006 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.160134 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-scripts\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.187719 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwc9g\" (UniqueName: \"kubernetes.io/projected/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-kube-api-access-qwc9g\") pod \"ovn-controller-m4jtm-config-vpzfx\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") " pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.208680 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.417074 4880 generic.go:334] "Generic (PLEG): container finished" podID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerID="5e495af25e395418128c51e910ef72ae6a4966db29958e2b1fc02c70abaf0de2" exitCode=0
Dec 01 03:13:00 crc kubenswrapper[4880]: I1201 03:13:00.417116 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd","Type":"ContainerDied","Data":"5e495af25e395418128c51e910ef72ae6a4966db29958e2b1fc02c70abaf0de2"}
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.434448 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerID="f0877f6eb973ad9dbf0a4f3dba4a79a8b3f449d6a63ad7c8fcb84bb8765bd5b7" exitCode=0
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.434593 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d7b466f3-1cab-4282-963d-2cf055d1514f","Type":"ContainerDied","Data":"f0877f6eb973ad9dbf0a4f3dba4a79a8b3f449d6a63ad7c8fcb84bb8765bd5b7"}
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.440584 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd","Type":"ContainerStarted","Data":"8471f031cfe28e09c92a798118777d4337f58622935c44f175d34afcdbcc9fa2"}
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.441439 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.484520 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.498625762 podStartE2EDuration="1m15.484501779s" podCreationTimestamp="2025-12-01 03:11:47 +0000 UTC" firstStartedPulling="2025-12-01 03:11:49.546382366 +0000 UTC m=+939.057636738" lastFinishedPulling="2025-12-01 03:12:26.532258383 +0000 UTC m=+976.043512755" observedRunningTime="2025-12-01 03:13:02.482629372 +0000 UTC m=+1011.993883764" watchObservedRunningTime="2025-12-01 03:13:02.484501779 +0000 UTC m=+1011.995756151"
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.618188 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4jtm-config-vpzfx"]
Dec 01 03:13:02 crc kubenswrapper[4880]: W1201 03:13:02.627632 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22e3fa53_d58f_46ea_a0ec_e48d2a4bcbaf.slice/crio-a387059b85ec989984ec11a6cc915c4e3059c0bfec88bb2e08ad04b99559d81c WatchSource:0}: Error finding container a387059b85ec989984ec11a6cc915c4e3059c0bfec88bb2e08ad04b99559d81c: Status 404 returned error can't find the container with id a387059b85ec989984ec11a6cc915c4e3059c0bfec88bb2e08ad04b99559d81c
Dec 01 03:13:02 crc kubenswrapper[4880]: I1201 03:13:02.671126 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 01 03:13:02 crc kubenswrapper[4880]: W1201 03:13:02.681110 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod121483f7_5771_4af8_9777_b980cc1bd4ad.slice/crio-b894688ac0ffec1a993c49060a6e29ff2b1742cada1b123eb8d845ac6fefbe10 WatchSource:0}: Error finding container b894688ac0ffec1a993c49060a6e29ff2b1742cada1b123eb8d845ac6fefbe10: Status 404 returned error can't find the container with id b894688ac0ffec1a993c49060a6e29ff2b1742cada1b123eb8d845ac6fefbe10
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.455836 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4jtm-config-vpzfx" event={"ID":"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf","Type":"ContainerStarted","Data":"e7e4b48f5e9544abff424672ecdf08ae9130b7622b111ec3da8e82947db4c622"}
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.456325 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4jtm-config-vpzfx" event={"ID":"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf","Type":"ContainerStarted","Data":"a387059b85ec989984ec11a6cc915c4e3059c0bfec88bb2e08ad04b99559d81c"}
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.457635 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d7b466f3-1cab-4282-963d-2cf055d1514f","Type":"ContainerStarted","Data":"63a1523ecb3393de74ecade1408c7411b563e31b3d9177f58986ad2f3850a22a"}
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.457859 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.459798 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mgfdp" event={"ID":"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77","Type":"ContainerStarted","Data":"63c8cc42663434f16eff74be28fa75ae0a8f81f467f21a565445dc37f983329f"}
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.462139 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"b894688ac0ffec1a993c49060a6e29ff2b1742cada1b123eb8d845ac6fefbe10"}
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.503574 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371961.351223 podStartE2EDuration="1m15.503552168s" podCreationTimestamp="2025-12-01 03:11:48 +0000 UTC" firstStartedPulling="2025-12-01 03:11:50.296824121 +0000 UTC m=+939.808078493" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:03.495841435 +0000 UTC m=+1013.007095807" watchObservedRunningTime="2025-12-01 03:13:03.503552168 +0000 UTC m=+1013.014806540"
Dec 01 03:13:03 crc kubenswrapper[4880]: I1201 03:13:03.518999 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mgfdp" podStartSLOduration=2.843991296 podStartE2EDuration="15.518982736s" podCreationTimestamp="2025-12-01 03:12:48 +0000 UTC" firstStartedPulling="2025-12-01 03:12:49.492374232 +0000 UTC m=+999.003628604" lastFinishedPulling="2025-12-01 03:13:02.167365672 +0000 UTC m=+1011.678620044" observedRunningTime="2025-12-01 03:13:03.516327099 +0000 UTC m=+1013.027581471" watchObservedRunningTime="2025-12-01 03:13:03.518982736 +0000 UTC m=+1013.030237108"
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.476422 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"4385730a262f6df2291bdbbf56950002d325847570bf3f20306e494b68e0e74e"}
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.476713 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"ff6300cff5fab1223beebcedd31d7f85697303c6f6c1916c0da1fc32c32087c4"}
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.476724 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"389cadad93a089c86f02409b759eab199ec032d20e91045f91decb22b5081822"}
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.476732 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"1647fe78b6253c0bc82f5753f2584ea15e9a80dec0ca319f55828a0031f6ab06"}
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.478474 4880 generic.go:334] "Generic (PLEG): container finished" podID="22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" containerID="e7e4b48f5e9544abff424672ecdf08ae9130b7622b111ec3da8e82947db4c622" exitCode=0
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.478669 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4jtm-config-vpzfx" event={"ID":"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf","Type":"ContainerDied","Data":"e7e4b48f5e9544abff424672ecdf08ae9130b7622b111ec3da8e82947db4c622"}
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.654303 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-m4jtm"
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.880754 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972310 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwc9g\" (UniqueName: \"kubernetes.io/projected/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-kube-api-access-qwc9g\") pod \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") "
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972404 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-additional-scripts\") pod \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") "
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972427 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-scripts\") pod \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") "
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972445 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run-ovn\") pod \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") "
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972472 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run\") pod \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") "
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972570 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-log-ovn\") pod \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\" (UID: \"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf\") "
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.972920 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" (UID: "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.974147 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" (UID: "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.974181 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" (UID: "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.974199 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run" (OuterVolumeSpecName: "var-run") pod "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" (UID: "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.974384 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-scripts" (OuterVolumeSpecName: "scripts") pod "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" (UID: "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:13:04 crc kubenswrapper[4880]: I1201 03:13:04.989733 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-kube-api-access-qwc9g" (OuterVolumeSpecName: "kube-api-access-qwc9g") pod "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" (UID: "22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf"). InnerVolumeSpecName "kube-api-access-qwc9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.074584 4880 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.074615 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwc9g\" (UniqueName: \"kubernetes.io/projected/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-kube-api-access-qwc9g\") on node \"crc\" DevicePath \"\""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.074625 4880 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.074636 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.074645 4880 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.074653 4880 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf-var-run\") on node \"crc\" DevicePath \"\""
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.489060 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4jtm-config-vpzfx" event={"ID":"22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf","Type":"ContainerDied","Data":"a387059b85ec989984ec11a6cc915c4e3059c0bfec88bb2e08ad04b99559d81c"}
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.489095 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a387059b85ec989984ec11a6cc915c4e3059c0bfec88bb2e08ad04b99559d81c"
Dec 01 03:13:05 crc kubenswrapper[4880]: I1201 03:13:05.489177 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4jtm-config-vpzfx"
Dec 01 03:13:06 crc kubenswrapper[4880]: I1201 03:13:06.003557 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-m4jtm-config-vpzfx"]
Dec 01 03:13:06 crc kubenswrapper[4880]: I1201 03:13:06.011625 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-m4jtm-config-vpzfx"]
Dec 01 03:13:06 crc kubenswrapper[4880]: I1201 03:13:06.506300 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"27a25236960bd19804fe3fcc1caed58de67a286193482ec2c4ab8791b7316549"}
Dec 01 03:13:06 crc kubenswrapper[4880]: I1201 03:13:06.507544 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"aba99943ae3b02c9eaf9e9ca13cfb5fc9ee0353345a18a7f6509b46b6f70e339"}
Dec 01 03:13:06 crc kubenswrapper[4880]: I1201 03:13:06.507612 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"05004b89cef51fa4b08dab2b18083eac13b9b12efe0babffef610a4b4a561198"}
Dec 01 03:13:06 crc kubenswrapper[4880]: I1201 03:13:06.793747 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" path="/var/lib/kubelet/pods/22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf/volumes"
Dec 01 03:13:07 crc kubenswrapper[4880]: I1201 03:13:07.516796 4880 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"f4f159934c4148a42b661d7711cc0a272e110b900f68b70ac48c578e27a16f46"} Dec 01 03:13:08 crc kubenswrapper[4880]: I1201 03:13:08.533573 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"13552008dc0921f47b115c5001173e2d992d5697cae34753a60d471d16efee8e"} Dec 01 03:13:08 crc kubenswrapper[4880]: I1201 03:13:08.533835 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"78f3784230871b663b1087b2e9f0220e8f5fd0778f16270e3f4555589d7d11f6"} Dec 01 03:13:08 crc kubenswrapper[4880]: I1201 03:13:08.533847 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"287feed3541576bd7da9b9e350d21590957ffc12ed456f3fb2ae2d9a9465a68a"} Dec 01 03:13:08 crc kubenswrapper[4880]: I1201 03:13:08.533856 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"e227ae0941b821fcc8b4d5c9f8f56777973806e11cc6f7422831993ee11d9f06"} Dec 01 03:13:08 crc kubenswrapper[4880]: I1201 03:13:08.533864 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"bd0efccd7f8eb6cffb42fc40ed12b3bdef3e1e4468880c33b505b15b7f137621"} Dec 01 03:13:08 crc kubenswrapper[4880]: I1201 03:13:08.533904 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"b105490a27eeb1a1d4076c838e9751b884b6c9fea0806910bdfaea1c8ac12170"} Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.547846 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"121483f7-5771-4af8-9777-b980cc1bd4ad","Type":"ContainerStarted","Data":"dd853f416fecc9f8d3659e7d7de18c9a8122a87dc0d0bab3b87e97179d66a6f8"} Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.630573 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.909644172 podStartE2EDuration="44.63055755s" podCreationTimestamp="2025-12-01 03:12:25 +0000 UTC" firstStartedPulling="2025-12-01 03:13:02.68402608 +0000 UTC m=+1012.195280452" lastFinishedPulling="2025-12-01 03:13:07.404939458 +0000 UTC m=+1016.916193830" observedRunningTime="2025-12-01 03:13:09.617829391 +0000 UTC m=+1019.129083773" watchObservedRunningTime="2025-12-01 03:13:09.63055755 +0000 UTC m=+1019.141811922" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.929301 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59fc5bccc9-pvhg6"] Dec 01 03:13:09 crc kubenswrapper[4880]: E1201 03:13:09.929650 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" containerName="ovn-config" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.929669 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" containerName="ovn-config" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.929900 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e3fa53-d58f-46ea-a0ec-e48d2a4bcbaf" containerName="ovn-config" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.930804 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.937326 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.950429 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-swift-storage-0\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.950470 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-nb\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.950539 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-sb\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.950563 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-svc\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.951103 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-config\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.951193 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ggd\" (UniqueName: \"kubernetes.io/projected/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-kube-api-access-x6ggd\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:09 crc kubenswrapper[4880]: I1201 03:13:09.965586 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fc5bccc9-pvhg6"] Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.052211 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-swift-storage-0\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.052255 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-nb\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.052278 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-sb\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: 
\"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.052300 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-svc\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.052351 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-config\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.052398 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ggd\" (UniqueName: \"kubernetes.io/projected/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-kube-api-access-x6ggd\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.053288 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-config\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.053346 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-svc\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc 
kubenswrapper[4880]: I1201 03:13:10.053811 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-sb\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.053950 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-nb\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.054547 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-swift-storage-0\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.086290 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ggd\" (UniqueName: \"kubernetes.io/projected/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-kube-api-access-x6ggd\") pod \"dnsmasq-dns-59fc5bccc9-pvhg6\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.247129 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:10 crc kubenswrapper[4880]: I1201 03:13:10.749355 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59fc5bccc9-pvhg6"] Dec 01 03:13:11 crc kubenswrapper[4880]: I1201 03:13:11.563514 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerID="ba29c4eb9d7df4b4a6d46866bf45083b72ee960342a1ba4fae38d8a02a7eb41d" exitCode=0 Dec 01 03:13:11 crc kubenswrapper[4880]: I1201 03:13:11.563615 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" event={"ID":"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47","Type":"ContainerDied","Data":"ba29c4eb9d7df4b4a6d46866bf45083b72ee960342a1ba4fae38d8a02a7eb41d"} Dec 01 03:13:11 crc kubenswrapper[4880]: I1201 03:13:11.563787 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" event={"ID":"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47","Type":"ContainerStarted","Data":"401c51cbb71cd22154bde249776e1ca6d4070bd5c0805c3b42871abd8dfe4c0e"} Dec 01 03:13:12 crc kubenswrapper[4880]: I1201 03:13:12.573330 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" event={"ID":"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47","Type":"ContainerStarted","Data":"cce511c67834f98cd4a6875f439d1ef3688c78de52a8ad4bbd07a1c29fcae13f"} Dec 01 03:13:12 crc kubenswrapper[4880]: I1201 03:13:12.574358 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:12 crc kubenswrapper[4880]: I1201 03:13:12.639540 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" podStartSLOduration=3.63952402 podStartE2EDuration="3.63952402s" podCreationTimestamp="2025-12-01 03:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:12.63518669 +0000 UTC m=+1022.146441062" watchObservedRunningTime="2025-12-01 03:13:12.63952402 +0000 UTC m=+1022.150778392" Dec 01 03:13:13 crc kubenswrapper[4880]: I1201 03:13:13.587017 4880 generic.go:334] "Generic (PLEG): container finished" podID="2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" containerID="63c8cc42663434f16eff74be28fa75ae0a8f81f467f21a565445dc37f983329f" exitCode=0 Dec 01 03:13:13 crc kubenswrapper[4880]: I1201 03:13:13.587231 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mgfdp" event={"ID":"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77","Type":"ContainerDied","Data":"63c8cc42663434f16eff74be28fa75ae0a8f81f467f21a565445dc37f983329f"} Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.030519 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mgfdp" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.176016 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-combined-ca-bundle\") pod \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.176307 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-db-sync-config-data\") pod \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.176355 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29b4l\" (UniqueName: \"kubernetes.io/projected/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-kube-api-access-29b4l\") pod \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\" (UID: 
\"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.176428 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-config-data\") pod \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\" (UID: \"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77\") " Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.183393 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" (UID: "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.185162 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-kube-api-access-29b4l" (OuterVolumeSpecName: "kube-api-access-29b4l") pod "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" (UID: "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77"). InnerVolumeSpecName "kube-api-access-29b4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.206289 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" (UID: "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.245961 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-config-data" (OuterVolumeSpecName: "config-data") pod "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" (UID: "2a3bf429-3ea4-43b1-a5c3-34c138ba8e77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.278389 4880 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.278421 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29b4l\" (UniqueName: \"kubernetes.io/projected/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-kube-api-access-29b4l\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.278434 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.278474 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.618338 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mgfdp" event={"ID":"2a3bf429-3ea4-43b1-a5c3-34c138ba8e77","Type":"ContainerDied","Data":"1ec4b2197a661860ccd953aeba400f80d46acc8acf0e77b989d350da33fc00ba"} Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.618669 4880 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="1ec4b2197a661860ccd953aeba400f80d46acc8acf0e77b989d350da33fc00ba" Dec 01 03:13:15 crc kubenswrapper[4880]: I1201 03:13:15.618816 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mgfdp" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.151313 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fc5bccc9-pvhg6"] Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.151507 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerName="dnsmasq-dns" containerID="cri-o://cce511c67834f98cd4a6875f439d1ef3688c78de52a8ad4bbd07a1c29fcae13f" gracePeriod=10 Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.160044 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.234913 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69456b8679-gnrn4"] Dec 01 03:13:16 crc kubenswrapper[4880]: E1201 03:13:16.235230 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" containerName="glance-db-sync" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.235240 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" containerName="glance-db-sync" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.235411 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" containerName="glance-db-sync" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.246354 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.291313 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69456b8679-gnrn4"] Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.320673 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-svc\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.320790 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kmzw\" (UniqueName: \"kubernetes.io/projected/7950bc21-2f03-4e16-a9e7-2c76a48078df-kube-api-access-4kmzw\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.320818 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-config\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.320860 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-sb\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.320893 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-swift-storage-0\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.320943 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-nb\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.421901 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-nb\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.422189 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-svc\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.422234 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kmzw\" (UniqueName: \"kubernetes.io/projected/7950bc21-2f03-4e16-a9e7-2c76a48078df-kube-api-access-4kmzw\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.422252 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-config\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.422294 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-sb\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.422312 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-swift-storage-0\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.422736 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-nb\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.423035 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-svc\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.423089 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-config\") 
pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.423561 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-sb\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.423624 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-swift-storage-0\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.463728 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kmzw\" (UniqueName: \"kubernetes.io/projected/7950bc21-2f03-4e16-a9e7-2c76a48078df-kube-api-access-4kmzw\") pod \"dnsmasq-dns-69456b8679-gnrn4\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.570995 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.638435 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerID="cce511c67834f98cd4a6875f439d1ef3688c78de52a8ad4bbd07a1c29fcae13f" exitCode=0 Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.638479 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" event={"ID":"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47","Type":"ContainerDied","Data":"cce511c67834f98cd4a6875f439d1ef3688c78de52a8ad4bbd07a1c29fcae13f"} Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.854892 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.935400 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-nb\") pod \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.935453 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-swift-storage-0\") pod \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.935479 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-svc\") pod \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.935527 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x6ggd\" (UniqueName: \"kubernetes.io/projected/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-kube-api-access-x6ggd\") pod \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.936173 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-sb\") pod \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.936227 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-config\") pod \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\" (UID: \"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47\") " Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.940533 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-kube-api-access-x6ggd" (OuterVolumeSpecName: "kube-api-access-x6ggd") pod "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" (UID: "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47"). InnerVolumeSpecName "kube-api-access-x6ggd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.969969 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" (UID: "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.973924 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-config" (OuterVolumeSpecName: "config") pod "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" (UID: "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.975795 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" (UID: "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.977911 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" (UID: "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:16 crc kubenswrapper[4880]: I1201 03:13:16.980637 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" (UID: "d7ad4bd3-59c5-4a17-a490-3e3bd07eff47"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.037954 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.037981 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.037991 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.038001 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6ggd\" (UniqueName: \"kubernetes.io/projected/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-kube-api-access-x6ggd\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.038012 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.038020 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.070657 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69456b8679-gnrn4"] Dec 01 03:13:17 crc kubenswrapper[4880]: W1201 03:13:17.077275 4880 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7950bc21_2f03_4e16_a9e7_2c76a48078df.slice/crio-8c868dead88eb8be052b1e3d87075c798e42ac5dbdc5b0b64b6acb50589ead00 WatchSource:0}: Error finding container 8c868dead88eb8be052b1e3d87075c798e42ac5dbdc5b0b64b6acb50589ead00: Status 404 returned error can't find the container with id 8c868dead88eb8be052b1e3d87075c798e42ac5dbdc5b0b64b6acb50589ead00 Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.650563 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" event={"ID":"d7ad4bd3-59c5-4a17-a490-3e3bd07eff47","Type":"ContainerDied","Data":"401c51cbb71cd22154bde249776e1ca6d4070bd5c0805c3b42871abd8dfe4c0e"} Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.650814 4880 scope.go:117] "RemoveContainer" containerID="cce511c67834f98cd4a6875f439d1ef3688c78de52a8ad4bbd07a1c29fcae13f" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.650633 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59fc5bccc9-pvhg6" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.654505 4880 generic.go:334] "Generic (PLEG): container finished" podID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerID="35d7a3aac11a41c678640d39e8560a7ebfa91494e3f062b9fe2474f349f75820" exitCode=0 Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.654552 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" event={"ID":"7950bc21-2f03-4e16-a9e7-2c76a48078df","Type":"ContainerDied","Data":"35d7a3aac11a41c678640d39e8560a7ebfa91494e3f062b9fe2474f349f75820"} Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.654576 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" event={"ID":"7950bc21-2f03-4e16-a9e7-2c76a48078df","Type":"ContainerStarted","Data":"8c868dead88eb8be052b1e3d87075c798e42ac5dbdc5b0b64b6acb50589ead00"} Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.768014 4880 scope.go:117] "RemoveContainer" containerID="ba29c4eb9d7df4b4a6d46866bf45083b72ee960342a1ba4fae38d8a02a7eb41d" Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.820365 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59fc5bccc9-pvhg6"] Dec 01 03:13:17 crc kubenswrapper[4880]: I1201 03:13:17.827777 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59fc5bccc9-pvhg6"] Dec 01 03:13:18 crc kubenswrapper[4880]: I1201 03:13:18.667970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" event={"ID":"7950bc21-2f03-4e16-a9e7-2c76a48078df","Type":"ContainerStarted","Data":"5171f871e6d95428c049012148c9a0c79198b7125214c960436f3ea2c2af2217"} Dec 01 03:13:18 crc kubenswrapper[4880]: I1201 03:13:18.668335 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:18 crc kubenswrapper[4880]: I1201 
03:13:18.689458 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podStartSLOduration=2.689440604 podStartE2EDuration="2.689440604s" podCreationTimestamp="2025-12-01 03:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:18.684195403 +0000 UTC m=+1028.195449785" watchObservedRunningTime="2025-12-01 03:13:18.689440604 +0000 UTC m=+1028.200694976" Dec 01 03:13:18 crc kubenswrapper[4880]: I1201 03:13:18.795722 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" path="/var/lib/kubelet/pods/d7ad4bd3-59c5-4a17-a490-3e3bd07eff47/volumes" Dec 01 03:13:18 crc kubenswrapper[4880]: I1201 03:13:18.826144 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:13:19 crc kubenswrapper[4880]: I1201 03:13:19.465044 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.151264 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-j7kh9"] Dec 01 03:13:21 crc kubenswrapper[4880]: E1201 03:13:21.151846 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerName="dnsmasq-dns" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.151857 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerName="dnsmasq-dns" Dec 01 03:13:21 crc kubenswrapper[4880]: E1201 03:13:21.151889 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerName="init" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.151895 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerName="init" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.152052 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ad4bd3-59c5-4a17-a490-3e3bd07eff47" containerName="dnsmasq-dns" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.152627 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.157205 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-688c-account-create-update-xvv2g"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.158255 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.159604 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.186766 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-688c-account-create-update-xvv2g"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.248268 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-j7kh9"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.278143 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-5ff8k"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.279096 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.297685 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5ff8k"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.311323 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c09d440-d3ba-4284-be20-bd4853fdbd6a-operator-scripts\") pod \"heat-db-create-j7kh9\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.311388 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4cr\" (UniqueName: \"kubernetes.io/projected/7554d9ac-da16-4f66-8174-cec776c1cb09-kube-api-access-dk4cr\") pod \"heat-688c-account-create-update-xvv2g\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.311455 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crv4q\" (UniqueName: \"kubernetes.io/projected/8c09d440-d3ba-4284-be20-bd4853fdbd6a-kube-api-access-crv4q\") pod \"heat-db-create-j7kh9\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.311481 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7554d9ac-da16-4f66-8174-cec776c1cb09-operator-scripts\") pod \"heat-688c-account-create-update-xvv2g\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.387660 4880 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/barbican-db-create-s7glh"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.390008 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.413119 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-s7glh"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.413792 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9swc\" (UniqueName: \"kubernetes.io/projected/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-kube-api-access-q9swc\") pod \"cinder-db-create-5ff8k\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.413844 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c09d440-d3ba-4284-be20-bd4853fdbd6a-operator-scripts\") pod \"heat-db-create-j7kh9\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.413911 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-operator-scripts\") pod \"cinder-db-create-5ff8k\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.413943 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4cr\" (UniqueName: \"kubernetes.io/projected/7554d9ac-da16-4f66-8174-cec776c1cb09-kube-api-access-dk4cr\") pod \"heat-688c-account-create-update-xvv2g\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " 
pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.413981 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crv4q\" (UniqueName: \"kubernetes.io/projected/8c09d440-d3ba-4284-be20-bd4853fdbd6a-kube-api-access-crv4q\") pod \"heat-db-create-j7kh9\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.414059 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7554d9ac-da16-4f66-8174-cec776c1cb09-operator-scripts\") pod \"heat-688c-account-create-update-xvv2g\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.414976 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7554d9ac-da16-4f66-8174-cec776c1cb09-operator-scripts\") pod \"heat-688c-account-create-update-xvv2g\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.442238 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c09d440-d3ba-4284-be20-bd4853fdbd6a-operator-scripts\") pod \"heat-db-create-j7kh9\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.474991 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crv4q\" (UniqueName: \"kubernetes.io/projected/8c09d440-d3ba-4284-be20-bd4853fdbd6a-kube-api-access-crv4q\") pod \"heat-db-create-j7kh9\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " 
pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.478578 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4cr\" (UniqueName: \"kubernetes.io/projected/7554d9ac-da16-4f66-8174-cec776c1cb09-kube-api-access-dk4cr\") pod \"heat-688c-account-create-update-xvv2g\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.488598 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.509422 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-258b-account-create-update-kscpx"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.510762 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.513360 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.515422 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9swc\" (UniqueName: \"kubernetes.io/projected/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-kube-api-access-q9swc\") pod \"cinder-db-create-5ff8k\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.515471 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a939664f-b676-4f49-9e2f-69dc060cd7aa-operator-scripts\") pod \"barbican-db-create-s7glh\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc 
kubenswrapper[4880]: I1201 03:13:21.515494 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-operator-scripts\") pod \"cinder-db-create-5ff8k\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.515543 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9gs\" (UniqueName: \"kubernetes.io/projected/a939664f-b676-4f49-9e2f-69dc060cd7aa-kube-api-access-6v9gs\") pod \"barbican-db-create-s7glh\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.516335 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-operator-scripts\") pod \"cinder-db-create-5ff8k\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.522166 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-258b-account-create-update-kscpx"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.595301 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-458c-account-create-update-7884x"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.596335 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.599705 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.618236 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9swc\" (UniqueName: \"kubernetes.io/projected/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-kube-api-access-q9swc\") pod \"cinder-db-create-5ff8k\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.618600 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9gs\" (UniqueName: \"kubernetes.io/projected/a939664f-b676-4f49-9e2f-69dc060cd7aa-kube-api-access-6v9gs\") pod \"barbican-db-create-s7glh\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.618645 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4p5\" (UniqueName: \"kubernetes.io/projected/f183f4ac-adb3-4020-80c4-a06486c2976f-kube-api-access-7x4p5\") pod \"barbican-258b-account-create-update-kscpx\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.618719 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f183f4ac-adb3-4020-80c4-a06486c2976f-operator-scripts\") pod \"barbican-258b-account-create-update-kscpx\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.618775 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a939664f-b676-4f49-9e2f-69dc060cd7aa-operator-scripts\") pod \"barbican-db-create-s7glh\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.619616 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a939664f-b676-4f49-9e2f-69dc060cd7aa-operator-scripts\") pod \"barbican-db-create-s7glh\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.635476 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-458c-account-create-update-7884x"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.657048 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ztwtt"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.658017 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.671803 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v9gs\" (UniqueName: \"kubernetes.io/projected/a939664f-b676-4f49-9e2f-69dc060cd7aa-kube-api-access-6v9gs\") pod \"barbican-db-create-s7glh\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.672793 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.673024 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hsw47" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.673165 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.673198 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.716804 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ztwtt"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.719632 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f183f4ac-adb3-4020-80c4-a06486c2976f-operator-scripts\") pod \"barbican-258b-account-create-update-kscpx\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.719686 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkpqw\" (UniqueName: \"kubernetes.io/projected/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-kube-api-access-jkpqw\") pod 
\"cinder-458c-account-create-update-7884x\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.719933 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4p5\" (UniqueName: \"kubernetes.io/projected/f183f4ac-adb3-4020-80c4-a06486c2976f-kube-api-access-7x4p5\") pod \"barbican-258b-account-create-update-kscpx\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.720171 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f183f4ac-adb3-4020-80c4-a06486c2976f-operator-scripts\") pod \"barbican-258b-account-create-update-kscpx\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.720046 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-operator-scripts\") pod \"cinder-458c-account-create-update-7884x\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.724334 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.762976 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d2e-account-create-update-9m5g5"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.764344 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.777262 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.785819 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.798358 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-d7qqc"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.799378 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.810298 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4p5\" (UniqueName: \"kubernetes.io/projected/f183f4ac-adb3-4020-80c4-a06486c2976f-kube-api-access-7x4p5\") pod \"barbican-258b-account-create-update-kscpx\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.821948 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d2e-account-create-update-9m5g5"] Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.837669 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnjqz\" (UniqueName: \"kubernetes.io/projected/69a60673-7e16-4057-8c8b-1c0b81de2a32-kube-api-access-rnjqz\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.837709 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-combined-ca-bundle\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.837764 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-operator-scripts\") pod \"cinder-458c-account-create-update-7884x\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.837822 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-config-data\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.837842 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkpqw\" (UniqueName: \"kubernetes.io/projected/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-kube-api-access-jkpqw\") pod \"cinder-458c-account-create-update-7884x\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.838725 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-operator-scripts\") pod \"cinder-458c-account-create-update-7884x\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.853564 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-d7qqc"] Dec 
01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.865182 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkpqw\" (UniqueName: \"kubernetes.io/projected/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-kube-api-access-jkpqw\") pod \"cinder-458c-account-create-update-7884x\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.903074 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949642 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60589a93-d998-4636-9220-053ff2f8384c-operator-scripts\") pod \"neutron-db-create-d7qqc\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949706 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnjqz\" (UniqueName: \"kubernetes.io/projected/69a60673-7e16-4057-8c8b-1c0b81de2a32-kube-api-access-rnjqz\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949728 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-combined-ca-bundle\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949802 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjgs4\" (UniqueName: 
\"kubernetes.io/projected/60589a93-d998-4636-9220-053ff2f8384c-kube-api-access-fjgs4\") pod \"neutron-db-create-d7qqc\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949853 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55f1968-78c3-4b8d-9fa9-2b2807665167-operator-scripts\") pod \"neutron-5d2e-account-create-update-9m5g5\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949930 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkpt\" (UniqueName: \"kubernetes.io/projected/d55f1968-78c3-4b8d-9fa9-2b2807665167-kube-api-access-dfkpt\") pod \"neutron-5d2e-account-create-update-9m5g5\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.949972 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-config-data\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.962584 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-combined-ca-bundle\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.974125 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-config-data\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:21 crc kubenswrapper[4880]: I1201 03:13:21.974515 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.009671 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnjqz\" (UniqueName: \"kubernetes.io/projected/69a60673-7e16-4057-8c8b-1c0b81de2a32-kube-api-access-rnjqz\") pod \"keystone-db-sync-ztwtt\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.010232 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.039347 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.052673 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjgs4\" (UniqueName: \"kubernetes.io/projected/60589a93-d998-4636-9220-053ff2f8384c-kube-api-access-fjgs4\") pod \"neutron-db-create-d7qqc\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.052731 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55f1968-78c3-4b8d-9fa9-2b2807665167-operator-scripts\") pod \"neutron-5d2e-account-create-update-9m5g5\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.052773 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfkpt\" (UniqueName: \"kubernetes.io/projected/d55f1968-78c3-4b8d-9fa9-2b2807665167-kube-api-access-dfkpt\") pod \"neutron-5d2e-account-create-update-9m5g5\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.052842 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60589a93-d998-4636-9220-053ff2f8384c-operator-scripts\") pod \"neutron-db-create-d7qqc\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.053452 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60589a93-d998-4636-9220-053ff2f8384c-operator-scripts\") pod \"neutron-db-create-d7qqc\" (UID: 
\"60589a93-d998-4636-9220-053ff2f8384c\") " pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.054123 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55f1968-78c3-4b8d-9fa9-2b2807665167-operator-scripts\") pod \"neutron-5d2e-account-create-update-9m5g5\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.085756 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfkpt\" (UniqueName: \"kubernetes.io/projected/d55f1968-78c3-4b8d-9fa9-2b2807665167-kube-api-access-dfkpt\") pod \"neutron-5d2e-account-create-update-9m5g5\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.088435 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjgs4\" (UniqueName: \"kubernetes.io/projected/60589a93-d998-4636-9220-053ff2f8384c-kube-api-access-fjgs4\") pod \"neutron-db-create-d7qqc\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.113736 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.161060 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.245829 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-688c-account-create-update-xvv2g"] Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.671947 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-j7kh9"] Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.689299 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-s7glh"] Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.731371 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-688c-account-create-update-xvv2g" event={"ID":"7554d9ac-da16-4f66-8174-cec776c1cb09","Type":"ContainerStarted","Data":"50ff3488fd76b4433f1f968670a7a72eee906072ca7b5e19d0beb63ce786a8ff"} Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.734861 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s7glh" event={"ID":"a939664f-b676-4f49-9e2f-69dc060cd7aa","Type":"ContainerStarted","Data":"1f119a88b2460bced6bbfb14aab3599b547e3e214939c743727f9db9be604a93"} Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.738495 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j7kh9" event={"ID":"8c09d440-d3ba-4284-be20-bd4853fdbd6a","Type":"ContainerStarted","Data":"7ba4ce8f38a33a1cab78993c53eb9f2d16147cc3266843b4167e661d55984d3c"} Dec 01 03:13:22 crc kubenswrapper[4880]: I1201 03:13:22.997984 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-5ff8k"] Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.021172 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-258b-account-create-update-kscpx"] Dec 01 03:13:23 crc kubenswrapper[4880]: W1201 03:13:23.023785 4880 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa095e4c_2b8c_4a33_b1a1_27c3c9d7cff1.slice/crio-b7d66efb0750028f55d22f651c1ccf11ef7a8ac3983ee2896b85636a47caa797 WatchSource:0}: Error finding container b7d66efb0750028f55d22f651c1ccf11ef7a8ac3983ee2896b85636a47caa797: Status 404 returned error can't find the container with id b7d66efb0750028f55d22f651c1ccf11ef7a8ac3983ee2896b85636a47caa797 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.077095 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ztwtt"] Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.441587 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-458c-account-create-update-7884x"] Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.449636 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d2e-account-create-update-9m5g5"] Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.455751 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-d7qqc"] Dec 01 03:13:23 crc kubenswrapper[4880]: W1201 03:13:23.459283 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2943e7b1_9d51_4f2e_b02f_f6725dd63c74.slice/crio-45dd89c3ba45afd44f8e72ba59bb36a7116d20326be521d41b838313e1472d3a WatchSource:0}: Error finding container 45dd89c3ba45afd44f8e72ba59bb36a7116d20326be521d41b838313e1472d3a: Status 404 returned error can't find the container with id 45dd89c3ba45afd44f8e72ba59bb36a7116d20326be521d41b838313e1472d3a Dec 01 03:13:23 crc kubenswrapper[4880]: W1201 03:13:23.464962 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd55f1968_78c3_4b8d_9fa9_2b2807665167.slice/crio-f5aa1e00140b4dbb7ca288f2cbe8e058c7855f00b091df9856056e9c7a5352a9 WatchSource:0}: Error finding container 
f5aa1e00140b4dbb7ca288f2cbe8e058c7855f00b091df9856056e9c7a5352a9: Status 404 returned error can't find the container with id f5aa1e00140b4dbb7ca288f2cbe8e058c7855f00b091df9856056e9c7a5352a9 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.747993 4880 generic.go:334] "Generic (PLEG): container finished" podID="8c09d440-d3ba-4284-be20-bd4853fdbd6a" containerID="9a9d4a3a67a5baa7a24d90ed41625af2b58838850fead4c4e170127d086c498f" exitCode=0 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.748042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j7kh9" event={"ID":"8c09d440-d3ba-4284-be20-bd4853fdbd6a","Type":"ContainerDied","Data":"9a9d4a3a67a5baa7a24d90ed41625af2b58838850fead4c4e170127d086c498f"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.749087 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztwtt" event={"ID":"69a60673-7e16-4057-8c8b-1c0b81de2a32","Type":"ContainerStarted","Data":"2177f446e51196c3c53a8e0dfb9c1f2b79f413bccd0545bce54d4fd7457631dc"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.750271 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d2e-account-create-update-9m5g5" event={"ID":"d55f1968-78c3-4b8d-9fa9-2b2807665167","Type":"ContainerStarted","Data":"f5aa1e00140b4dbb7ca288f2cbe8e058c7855f00b091df9856056e9c7a5352a9"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.751726 4880 generic.go:334] "Generic (PLEG): container finished" podID="f183f4ac-adb3-4020-80c4-a06486c2976f" containerID="d2534b9c8eca3c8f76ea66f02e8308f9b5838c333f2f1d8b570eda2dad698e74" exitCode=0 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.752311 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-258b-account-create-update-kscpx" event={"ID":"f183f4ac-adb3-4020-80c4-a06486c2976f","Type":"ContainerDied","Data":"d2534b9c8eca3c8f76ea66f02e8308f9b5838c333f2f1d8b570eda2dad698e74"} Dec 01 03:13:23 crc 
kubenswrapper[4880]: I1201 03:13:23.752340 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-258b-account-create-update-kscpx" event={"ID":"f183f4ac-adb3-4020-80c4-a06486c2976f","Type":"ContainerStarted","Data":"eed0288788e4d420c88f64f3f827d268c09cc878864be9ec63a71967c9c88b6f"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.753313 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-458c-account-create-update-7884x" event={"ID":"2943e7b1-9d51-4f2e-b02f-f6725dd63c74","Type":"ContainerStarted","Data":"45dd89c3ba45afd44f8e72ba59bb36a7116d20326be521d41b838313e1472d3a"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.754632 4880 generic.go:334] "Generic (PLEG): container finished" podID="7554d9ac-da16-4f66-8174-cec776c1cb09" containerID="6f72c1a36c8c0208c0c105043a1c5c07a6cd0a3d190728e948586fb34efca663" exitCode=0 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.754671 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-688c-account-create-update-xvv2g" event={"ID":"7554d9ac-da16-4f66-8174-cec776c1cb09","Type":"ContainerDied","Data":"6f72c1a36c8c0208c0c105043a1c5c07a6cd0a3d190728e948586fb34efca663"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.755945 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d7qqc" event={"ID":"60589a93-d998-4636-9220-053ff2f8384c","Type":"ContainerStarted","Data":"78fb0a28793b4ed21d2bd9dea6c037515d86bb41b71c8f6cbe867ae516a15d55"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.760512 4880 generic.go:334] "Generic (PLEG): container finished" podID="a939664f-b676-4f49-9e2f-69dc060cd7aa" containerID="bedff099db1d04e247cfea937e8fd7e5185ef5babd7a9aa274938f11f68a3d8d" exitCode=0 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.760611 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s7glh" 
event={"ID":"a939664f-b676-4f49-9e2f-69dc060cd7aa","Type":"ContainerDied","Data":"bedff099db1d04e247cfea937e8fd7e5185ef5babd7a9aa274938f11f68a3d8d"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.764934 4880 generic.go:334] "Generic (PLEG): container finished" podID="aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" containerID="309254ecd3c9a4cf48a3b65a92ba0706917850f2fdf3c31bec1c3538b99ed176" exitCode=0 Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.764982 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5ff8k" event={"ID":"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1","Type":"ContainerDied","Data":"309254ecd3c9a4cf48a3b65a92ba0706917850f2fdf3c31bec1c3538b99ed176"} Dec 01 03:13:23 crc kubenswrapper[4880]: I1201 03:13:23.765006 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5ff8k" event={"ID":"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1","Type":"ContainerStarted","Data":"b7d66efb0750028f55d22f651c1ccf11ef7a8ac3983ee2896b85636a47caa797"} Dec 01 03:13:24 crc kubenswrapper[4880]: I1201 03:13:24.773735 4880 generic.go:334] "Generic (PLEG): container finished" podID="2943e7b1-9d51-4f2e-b02f-f6725dd63c74" containerID="aad484185ff39742febe9c063a22074e4ccb7375c48ffe360be7dd584867dbaa" exitCode=0 Dec 01 03:13:24 crc kubenswrapper[4880]: I1201 03:13:24.773912 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-458c-account-create-update-7884x" event={"ID":"2943e7b1-9d51-4f2e-b02f-f6725dd63c74","Type":"ContainerDied","Data":"aad484185ff39742febe9c063a22074e4ccb7375c48ffe360be7dd584867dbaa"} Dec 01 03:13:24 crc kubenswrapper[4880]: I1201 03:13:24.777019 4880 generic.go:334] "Generic (PLEG): container finished" podID="60589a93-d998-4636-9220-053ff2f8384c" containerID="5393a7f82e41055e897f6d4d3005a21b45b947393927468ecef374f98ee37a40" exitCode=0 Dec 01 03:13:24 crc kubenswrapper[4880]: I1201 03:13:24.777067 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-d7qqc" event={"ID":"60589a93-d998-4636-9220-053ff2f8384c","Type":"ContainerDied","Data":"5393a7f82e41055e897f6d4d3005a21b45b947393927468ecef374f98ee37a40"} Dec 01 03:13:24 crc kubenswrapper[4880]: I1201 03:13:24.782171 4880 generic.go:334] "Generic (PLEG): container finished" podID="d55f1968-78c3-4b8d-9fa9-2b2807665167" containerID="fb379bdd3a442c305680db2578fa26329d41018e30eb1a205f849f08efa76745" exitCode=0 Dec 01 03:13:24 crc kubenswrapper[4880]: I1201 03:13:24.782212 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d2e-account-create-update-9m5g5" event={"ID":"d55f1968-78c3-4b8d-9fa9-2b2807665167","Type":"ContainerDied","Data":"fb379bdd3a442c305680db2578fa26329d41018e30eb1a205f849f08efa76745"} Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.181391 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.326597 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a939664f-b676-4f49-9e2f-69dc060cd7aa-operator-scripts\") pod \"a939664f-b676-4f49-9e2f-69dc060cd7aa\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.326972 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v9gs\" (UniqueName: \"kubernetes.io/projected/a939664f-b676-4f49-9e2f-69dc060cd7aa-kube-api-access-6v9gs\") pod \"a939664f-b676-4f49-9e2f-69dc060cd7aa\" (UID: \"a939664f-b676-4f49-9e2f-69dc060cd7aa\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.329486 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a939664f-b676-4f49-9e2f-69dc060cd7aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a939664f-b676-4f49-9e2f-69dc060cd7aa" (UID: 
"a939664f-b676-4f49-9e2f-69dc060cd7aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.341679 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a939664f-b676-4f49-9e2f-69dc060cd7aa-kube-api-access-6v9gs" (OuterVolumeSpecName: "kube-api-access-6v9gs") pod "a939664f-b676-4f49-9e2f-69dc060cd7aa" (UID: "a939664f-b676-4f49-9e2f-69dc060cd7aa"). InnerVolumeSpecName "kube-api-access-6v9gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.416117 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.420146 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.429513 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a939664f-b676-4f49-9e2f-69dc060cd7aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.429536 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v9gs\" (UniqueName: \"kubernetes.io/projected/a939664f-b676-4f49-9e2f-69dc060cd7aa-kube-api-access-6v9gs\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.435648 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.457767 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.533413 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4cr\" (UniqueName: \"kubernetes.io/projected/7554d9ac-da16-4f66-8174-cec776c1cb09-kube-api-access-dk4cr\") pod \"7554d9ac-da16-4f66-8174-cec776c1cb09\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.533495 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x4p5\" (UniqueName: \"kubernetes.io/projected/f183f4ac-adb3-4020-80c4-a06486c2976f-kube-api-access-7x4p5\") pod \"f183f4ac-adb3-4020-80c4-a06486c2976f\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.533590 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f183f4ac-adb3-4020-80c4-a06486c2976f-operator-scripts\") pod \"f183f4ac-adb3-4020-80c4-a06486c2976f\" (UID: \"f183f4ac-adb3-4020-80c4-a06486c2976f\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.533657 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7554d9ac-da16-4f66-8174-cec776c1cb09-operator-scripts\") pod \"7554d9ac-da16-4f66-8174-cec776c1cb09\" (UID: \"7554d9ac-da16-4f66-8174-cec776c1cb09\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.534387 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7554d9ac-da16-4f66-8174-cec776c1cb09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7554d9ac-da16-4f66-8174-cec776c1cb09" (UID: "7554d9ac-da16-4f66-8174-cec776c1cb09"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.534688 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f183f4ac-adb3-4020-80c4-a06486c2976f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f183f4ac-adb3-4020-80c4-a06486c2976f" (UID: "f183f4ac-adb3-4020-80c4-a06486c2976f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.536578 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f183f4ac-adb3-4020-80c4-a06486c2976f-kube-api-access-7x4p5" (OuterVolumeSpecName: "kube-api-access-7x4p5") pod "f183f4ac-adb3-4020-80c4-a06486c2976f" (UID: "f183f4ac-adb3-4020-80c4-a06486c2976f"). InnerVolumeSpecName "kube-api-access-7x4p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.537442 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7554d9ac-da16-4f66-8174-cec776c1cb09-kube-api-access-dk4cr" (OuterVolumeSpecName: "kube-api-access-dk4cr") pod "7554d9ac-da16-4f66-8174-cec776c1cb09" (UID: "7554d9ac-da16-4f66-8174-cec776c1cb09"). InnerVolumeSpecName "kube-api-access-dk4cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.634562 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c09d440-d3ba-4284-be20-bd4853fdbd6a-operator-scripts\") pod \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.634806 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9swc\" (UniqueName: \"kubernetes.io/projected/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-kube-api-access-q9swc\") pod \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.634880 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-operator-scripts\") pod \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\" (UID: \"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.634903 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crv4q\" (UniqueName: \"kubernetes.io/projected/8c09d440-d3ba-4284-be20-bd4853fdbd6a-kube-api-access-crv4q\") pod \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\" (UID: \"8c09d440-d3ba-4284-be20-bd4853fdbd6a\") " Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.635190 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7554d9ac-da16-4f66-8174-cec776c1cb09-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.635206 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4cr\" (UniqueName: 
\"kubernetes.io/projected/7554d9ac-da16-4f66-8174-cec776c1cb09-kube-api-access-dk4cr\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.635218 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x4p5\" (UniqueName: \"kubernetes.io/projected/f183f4ac-adb3-4020-80c4-a06486c2976f-kube-api-access-7x4p5\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.635226 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f183f4ac-adb3-4020-80c4-a06486c2976f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.636033 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" (UID: "aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.636058 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c09d440-d3ba-4284-be20-bd4853fdbd6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c09d440-d3ba-4284-be20-bd4853fdbd6a" (UID: "8c09d440-d3ba-4284-be20-bd4853fdbd6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.638093 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c09d440-d3ba-4284-be20-bd4853fdbd6a-kube-api-access-crv4q" (OuterVolumeSpecName: "kube-api-access-crv4q") pod "8c09d440-d3ba-4284-be20-bd4853fdbd6a" (UID: "8c09d440-d3ba-4284-be20-bd4853fdbd6a"). InnerVolumeSpecName "kube-api-access-crv4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.638127 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-kube-api-access-q9swc" (OuterVolumeSpecName: "kube-api-access-q9swc") pod "aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" (UID: "aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1"). InnerVolumeSpecName "kube-api-access-q9swc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.736833 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9swc\" (UniqueName: \"kubernetes.io/projected/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-kube-api-access-q9swc\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.736929 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.736950 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crv4q\" (UniqueName: \"kubernetes.io/projected/8c09d440-d3ba-4284-be20-bd4853fdbd6a-kube-api-access-crv4q\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.736969 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c09d440-d3ba-4284-be20-bd4853fdbd6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.793225 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s7glh" event={"ID":"a939664f-b676-4f49-9e2f-69dc060cd7aa","Type":"ContainerDied","Data":"1f119a88b2460bced6bbfb14aab3599b547e3e214939c743727f9db9be604a93"} Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.793273 
4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f119a88b2460bced6bbfb14aab3599b547e3e214939c743727f9db9be604a93" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.793332 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s7glh" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.797426 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-258b-account-create-update-kscpx" event={"ID":"f183f4ac-adb3-4020-80c4-a06486c2976f","Type":"ContainerDied","Data":"eed0288788e4d420c88f64f3f827d268c09cc878864be9ec63a71967c9c88b6f"} Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.797466 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed0288788e4d420c88f64f3f827d268c09cc878864be9ec63a71967c9c88b6f" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.797527 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-258b-account-create-update-kscpx" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.804125 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-5ff8k" event={"ID":"aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1","Type":"ContainerDied","Data":"b7d66efb0750028f55d22f651c1ccf11ef7a8ac3983ee2896b85636a47caa797"} Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.804227 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d66efb0750028f55d22f651c1ccf11ef7a8ac3983ee2896b85636a47caa797" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.804282 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-5ff8k" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.813489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-j7kh9" event={"ID":"8c09d440-d3ba-4284-be20-bd4853fdbd6a","Type":"ContainerDied","Data":"7ba4ce8f38a33a1cab78993c53eb9f2d16147cc3266843b4167e661d55984d3c"} Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.813538 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba4ce8f38a33a1cab78993c53eb9f2d16147cc3266843b4167e661d55984d3c" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.813614 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-j7kh9" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.829315 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-688c-account-create-update-xvv2g" event={"ID":"7554d9ac-da16-4f66-8174-cec776c1cb09","Type":"ContainerDied","Data":"50ff3488fd76b4433f1f968670a7a72eee906072ca7b5e19d0beb63ce786a8ff"} Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.829372 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ff3488fd76b4433f1f968670a7a72eee906072ca7b5e19d0beb63ce786a8ff" Dec 01 03:13:25 crc kubenswrapper[4880]: I1201 03:13:25.829483 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-688c-account-create-update-xvv2g" Dec 01 03:13:26 crc kubenswrapper[4880]: I1201 03:13:26.573079 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:13:26 crc kubenswrapper[4880]: I1201 03:13:26.633319 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685cfc6bfc-2mb9m"] Dec 01 03:13:26 crc kubenswrapper[4880]: I1201 03:13:26.633546 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" podUID="d24dae40-463d-4451-a741-bca4504d68e8" containerName="dnsmasq-dns" containerID="cri-o://b2a8866b63ee4e18b22aae7dcca6794436a6500d0d12df7e8e7c3b6544cd5cd8" gracePeriod=10 Dec 01 03:13:26 crc kubenswrapper[4880]: I1201 03:13:26.840936 4880 generic.go:334] "Generic (PLEG): container finished" podID="d24dae40-463d-4451-a741-bca4504d68e8" containerID="b2a8866b63ee4e18b22aae7dcca6794436a6500d0d12df7e8e7c3b6544cd5cd8" exitCode=0 Dec 01 03:13:26 crc kubenswrapper[4880]: I1201 03:13:26.840975 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" event={"ID":"d24dae40-463d-4451-a741-bca4504d68e8","Type":"ContainerDied","Data":"b2a8866b63ee4e18b22aae7dcca6794436a6500d0d12df7e8e7c3b6544cd5cd8"} Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.417832 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.454463 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.469943 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.512272 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544441 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfkpt\" (UniqueName: \"kubernetes.io/projected/d55f1968-78c3-4b8d-9fa9-2b2807665167-kube-api-access-dfkpt\") pod \"d55f1968-78c3-4b8d-9fa9-2b2807665167\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544624 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-dns-svc\") pod \"d24dae40-463d-4451-a741-bca4504d68e8\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544672 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjgs4\" (UniqueName: \"kubernetes.io/projected/60589a93-d998-4636-9220-053ff2f8384c-kube-api-access-fjgs4\") pod \"60589a93-d998-4636-9220-053ff2f8384c\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544698 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-nb\") pod \"d24dae40-463d-4451-a741-bca4504d68e8\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544727 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6klc\" (UniqueName: \"kubernetes.io/projected/d24dae40-463d-4451-a741-bca4504d68e8-kube-api-access-s6klc\") pod 
\"d24dae40-463d-4451-a741-bca4504d68e8\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544762 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60589a93-d998-4636-9220-053ff2f8384c-operator-scripts\") pod \"60589a93-d998-4636-9220-053ff2f8384c\" (UID: \"60589a93-d998-4636-9220-053ff2f8384c\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544778 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-sb\") pod \"d24dae40-463d-4451-a741-bca4504d68e8\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544805 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-operator-scripts\") pod \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.544846 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55f1968-78c3-4b8d-9fa9-2b2807665167-operator-scripts\") pod \"d55f1968-78c3-4b8d-9fa9-2b2807665167\" (UID: \"d55f1968-78c3-4b8d-9fa9-2b2807665167\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.545839 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2943e7b1-9d51-4f2e-b02f-f6725dd63c74" (UID: "2943e7b1-9d51-4f2e-b02f-f6725dd63c74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.545932 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d55f1968-78c3-4b8d-9fa9-2b2807665167-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d55f1968-78c3-4b8d-9fa9-2b2807665167" (UID: "d55f1968-78c3-4b8d-9fa9-2b2807665167"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.546817 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60589a93-d998-4636-9220-053ff2f8384c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60589a93-d998-4636-9220-053ff2f8384c" (UID: "60589a93-d998-4636-9220-053ff2f8384c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.550218 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60589a93-d998-4636-9220-053ff2f8384c-kube-api-access-fjgs4" (OuterVolumeSpecName: "kube-api-access-fjgs4") pod "60589a93-d998-4636-9220-053ff2f8384c" (UID: "60589a93-d998-4636-9220-053ff2f8384c"). InnerVolumeSpecName "kube-api-access-fjgs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.552911 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24dae40-463d-4451-a741-bca4504d68e8-kube-api-access-s6klc" (OuterVolumeSpecName: "kube-api-access-s6klc") pod "d24dae40-463d-4451-a741-bca4504d68e8" (UID: "d24dae40-463d-4451-a741-bca4504d68e8"). InnerVolumeSpecName "kube-api-access-s6klc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.553015 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55f1968-78c3-4b8d-9fa9-2b2807665167-kube-api-access-dfkpt" (OuterVolumeSpecName: "kube-api-access-dfkpt") pod "d55f1968-78c3-4b8d-9fa9-2b2807665167" (UID: "d55f1968-78c3-4b8d-9fa9-2b2807665167"). InnerVolumeSpecName "kube-api-access-dfkpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.588625 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d24dae40-463d-4451-a741-bca4504d68e8" (UID: "d24dae40-463d-4451-a741-bca4504d68e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.590879 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d24dae40-463d-4451-a741-bca4504d68e8" (UID: "d24dae40-463d-4451-a741-bca4504d68e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.602347 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d24dae40-463d-4451-a741-bca4504d68e8" (UID: "d24dae40-463d-4451-a741-bca4504d68e8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646142 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkpqw\" (UniqueName: \"kubernetes.io/projected/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-kube-api-access-jkpqw\") pod \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\" (UID: \"2943e7b1-9d51-4f2e-b02f-f6725dd63c74\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646183 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-config\") pod \"d24dae40-463d-4451-a741-bca4504d68e8\" (UID: \"d24dae40-463d-4451-a741-bca4504d68e8\") " Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646417 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d55f1968-78c3-4b8d-9fa9-2b2807665167-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646428 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfkpt\" (UniqueName: \"kubernetes.io/projected/d55f1968-78c3-4b8d-9fa9-2b2807665167-kube-api-access-dfkpt\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646438 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646448 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjgs4\" (UniqueName: \"kubernetes.io/projected/60589a93-d998-4636-9220-053ff2f8384c-kube-api-access-fjgs4\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646475 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646484 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6klc\" (UniqueName: \"kubernetes.io/projected/d24dae40-463d-4451-a741-bca4504d68e8-kube-api-access-s6klc\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646492 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60589a93-d998-4636-9220-053ff2f8384c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646499 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.646508 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.649347 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-kube-api-access-jkpqw" (OuterVolumeSpecName: "kube-api-access-jkpqw") pod "2943e7b1-9d51-4f2e-b02f-f6725dd63c74" (UID: "2943e7b1-9d51-4f2e-b02f-f6725dd63c74"). InnerVolumeSpecName "kube-api-access-jkpqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.681620 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-config" (OuterVolumeSpecName: "config") pod "d24dae40-463d-4451-a741-bca4504d68e8" (UID: "d24dae40-463d-4451-a741-bca4504d68e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.747576 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkpqw\" (UniqueName: \"kubernetes.io/projected/2943e7b1-9d51-4f2e-b02f-f6725dd63c74-kube-api-access-jkpqw\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.747627 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d24dae40-463d-4451-a741-bca4504d68e8-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.878539 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-458c-account-create-update-7884x" event={"ID":"2943e7b1-9d51-4f2e-b02f-f6725dd63c74","Type":"ContainerDied","Data":"45dd89c3ba45afd44f8e72ba59bb36a7116d20326be521d41b838313e1472d3a"} Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.878583 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45dd89c3ba45afd44f8e72ba59bb36a7116d20326be521d41b838313e1472d3a" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.878650 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-458c-account-create-update-7884x" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.882515 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d7qqc" event={"ID":"60589a93-d998-4636-9220-053ff2f8384c","Type":"ContainerDied","Data":"78fb0a28793b4ed21d2bd9dea6c037515d86bb41b71c8f6cbe867ae516a15d55"} Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.882552 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78fb0a28793b4ed21d2bd9dea6c037515d86bb41b71c8f6cbe867ae516a15d55" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.882613 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d7qqc" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.885138 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d2e-account-create-update-9m5g5" event={"ID":"d55f1968-78c3-4b8d-9fa9-2b2807665167","Type":"ContainerDied","Data":"f5aa1e00140b4dbb7ca288f2cbe8e058c7855f00b091df9856056e9c7a5352a9"} Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.885158 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d2e-account-create-update-9m5g5" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.887390 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5aa1e00140b4dbb7ca288f2cbe8e058c7855f00b091df9856056e9c7a5352a9" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.897741 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztwtt" event={"ID":"69a60673-7e16-4057-8c8b-1c0b81de2a32","Type":"ContainerStarted","Data":"0c0fb2cac9fc5c19b21b0512ca3ba60aaae41ca8e5c2f0942eecbfdfe896212e"} Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.902127 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" event={"ID":"d24dae40-463d-4451-a741-bca4504d68e8","Type":"ContainerDied","Data":"cb9ed498a03bd03996cb8433f573aa7b39e31b97ff88b87a38ce68f076fa10d6"} Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.902166 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685cfc6bfc-2mb9m" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.902196 4880 scope.go:117] "RemoveContainer" containerID="b2a8866b63ee4e18b22aae7dcca6794436a6500d0d12df7e8e7c3b6544cd5cd8" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.931891 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ztwtt" podStartSLOduration=2.815972913 podStartE2EDuration="8.931832334s" podCreationTimestamp="2025-12-01 03:13:21 +0000 UTC" firstStartedPulling="2025-12-01 03:13:23.12007752 +0000 UTC m=+1032.631331892" lastFinishedPulling="2025-12-01 03:13:29.235936931 +0000 UTC m=+1038.747191313" observedRunningTime="2025-12-01 03:13:29.924095279 +0000 UTC m=+1039.435349651" watchObservedRunningTime="2025-12-01 03:13:29.931832334 +0000 UTC m=+1039.443086706" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.938622 4880 scope.go:117] "RemoveContainer" containerID="0da4badd63cb2fb5ba60ff5d1ce772a6d082b2e56ddef4e4b96e37a940fb3386" Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.951860 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685cfc6bfc-2mb9m"] Dec 01 03:13:29 crc kubenswrapper[4880]: I1201 03:13:29.959430 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685cfc6bfc-2mb9m"] Dec 01 03:13:30 crc kubenswrapper[4880]: I1201 03:13:30.809703 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24dae40-463d-4451-a741-bca4504d68e8" path="/var/lib/kubelet/pods/d24dae40-463d-4451-a741-bca4504d68e8/volumes" Dec 01 03:13:32 crc kubenswrapper[4880]: I1201 03:13:32.932595 4880 generic.go:334] "Generic (PLEG): container finished" podID="69a60673-7e16-4057-8c8b-1c0b81de2a32" containerID="0c0fb2cac9fc5c19b21b0512ca3ba60aaae41ca8e5c2f0942eecbfdfe896212e" exitCode=0 Dec 01 03:13:32 crc kubenswrapper[4880]: I1201 03:13:32.932694 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-ztwtt" event={"ID":"69a60673-7e16-4057-8c8b-1c0b81de2a32","Type":"ContainerDied","Data":"0c0fb2cac9fc5c19b21b0512ca3ba60aaae41ca8e5c2f0942eecbfdfe896212e"} Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.344644 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.532933 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-config-data\") pod \"69a60673-7e16-4057-8c8b-1c0b81de2a32\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.533109 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnjqz\" (UniqueName: \"kubernetes.io/projected/69a60673-7e16-4057-8c8b-1c0b81de2a32-kube-api-access-rnjqz\") pod \"69a60673-7e16-4057-8c8b-1c0b81de2a32\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.533165 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-combined-ca-bundle\") pod \"69a60673-7e16-4057-8c8b-1c0b81de2a32\" (UID: \"69a60673-7e16-4057-8c8b-1c0b81de2a32\") " Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.544157 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a60673-7e16-4057-8c8b-1c0b81de2a32-kube-api-access-rnjqz" (OuterVolumeSpecName: "kube-api-access-rnjqz") pod "69a60673-7e16-4057-8c8b-1c0b81de2a32" (UID: "69a60673-7e16-4057-8c8b-1c0b81de2a32"). InnerVolumeSpecName "kube-api-access-rnjqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.581155 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69a60673-7e16-4057-8c8b-1c0b81de2a32" (UID: "69a60673-7e16-4057-8c8b-1c0b81de2a32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.611185 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-config-data" (OuterVolumeSpecName: "config-data") pod "69a60673-7e16-4057-8c8b-1c0b81de2a32" (UID: "69a60673-7e16-4057-8c8b-1c0b81de2a32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.635695 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.635741 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnjqz\" (UniqueName: \"kubernetes.io/projected/69a60673-7e16-4057-8c8b-1c0b81de2a32-kube-api-access-rnjqz\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.635762 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69a60673-7e16-4057-8c8b-1c0b81de2a32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.957926 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztwtt" 
event={"ID":"69a60673-7e16-4057-8c8b-1c0b81de2a32","Type":"ContainerDied","Data":"2177f446e51196c3c53a8e0dfb9c1f2b79f413bccd0545bce54d4fd7457631dc"} Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.957968 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2177f446e51196c3c53a8e0dfb9c1f2b79f413bccd0545bce54d4fd7457631dc" Dec 01 03:13:34 crc kubenswrapper[4880]: I1201 03:13:34.957970 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ztwtt" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.249966 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c4d4946ff-8q56m"] Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.250756 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60589a93-d998-4636-9220-053ff2f8384c" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.250858 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="60589a93-d998-4636-9220-053ff2f8384c" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.250962 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a60673-7e16-4057-8c8b-1c0b81de2a32" containerName="keystone-db-sync" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251035 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a60673-7e16-4057-8c8b-1c0b81de2a32" containerName="keystone-db-sync" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251097 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251159 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251228 
4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24dae40-463d-4451-a741-bca4504d68e8" containerName="dnsmasq-dns" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251285 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24dae40-463d-4451-a741-bca4504d68e8" containerName="dnsmasq-dns" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251347 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c09d440-d3ba-4284-be20-bd4853fdbd6a" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251403 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c09d440-d3ba-4284-be20-bd4853fdbd6a" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251467 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a939664f-b676-4f49-9e2f-69dc060cd7aa" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251523 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a939664f-b676-4f49-9e2f-69dc060cd7aa" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251576 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f183f4ac-adb3-4020-80c4-a06486c2976f" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251629 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f183f4ac-adb3-4020-80c4-a06486c2976f" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251695 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2943e7b1-9d51-4f2e-b02f-f6725dd63c74" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251760 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2943e7b1-9d51-4f2e-b02f-f6725dd63c74" 
containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251830 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24dae40-463d-4451-a741-bca4504d68e8" containerName="init" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.251898 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24dae40-463d-4451-a741-bca4504d68e8" containerName="init" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.251960 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7554d9ac-da16-4f66-8174-cec776c1cb09" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252021 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7554d9ac-da16-4f66-8174-cec776c1cb09" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.252088 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55f1968-78c3-4b8d-9fa9-2b2807665167" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252151 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55f1968-78c3-4b8d-9fa9-2b2807665167" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252357 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252426 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f183f4ac-adb3-4020-80c4-a06486c2976f" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252488 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a60673-7e16-4057-8c8b-1c0b81de2a32" containerName="keystone-db-sync" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252553 4880 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7554d9ac-da16-4f66-8174-cec776c1cb09" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252613 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c09d440-d3ba-4284-be20-bd4853fdbd6a" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252682 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24dae40-463d-4451-a741-bca4504d68e8" containerName="dnsmasq-dns" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252740 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2943e7b1-9d51-4f2e-b02f-f6725dd63c74" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252805 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a939664f-b676-4f49-9e2f-69dc060cd7aa" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252888 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55f1968-78c3-4b8d-9fa9-2b2807665167" containerName="mariadb-account-create-update" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.252962 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="60589a93-d998-4636-9220-053ff2f8384c" containerName="mariadb-database-create" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.253810 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.260006 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2bcnx"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.261883 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.276536 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.276575 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.276618 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hsw47" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.276696 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.277101 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.314145 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c4d4946ff-8q56m"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348520 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-sb\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348580 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-nb\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348641 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-combined-ca-bundle\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348660 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l684g\" (UniqueName: \"kubernetes.io/projected/0439b521-3ec7-4d91-8eeb-3b18e0350c20-kube-api-access-l684g\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348687 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-config-data\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348710 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-credential-keys\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348733 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-config\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348754 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-fernet-keys\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348771 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-scripts\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348792 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-swift-storage-0\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348830 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-svc\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.348854 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjg2l\" (UniqueName: \"kubernetes.io/projected/e066ca3a-3c05-43c9-9466-688175328549-kube-api-access-zjg2l\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.352338 4880 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2bcnx"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.450789 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-nb\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.450896 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-combined-ca-bundle\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.450918 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l684g\" (UniqueName: \"kubernetes.io/projected/0439b521-3ec7-4d91-8eeb-3b18e0350c20-kube-api-access-l684g\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.450940 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-config-data\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.450959 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-credential-keys\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " 
pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.450981 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-config\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.451001 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-fernet-keys\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.451019 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-scripts\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.451037 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-swift-storage-0\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.451070 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-svc\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.451091 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg2l\" (UniqueName: \"kubernetes.io/projected/e066ca3a-3c05-43c9-9466-688175328549-kube-api-access-zjg2l\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.451112 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-sb\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.452168 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-config\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.452193 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-nb\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.452226 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-sb\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.453106 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-svc\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.453686 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-swift-storage-0\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.460564 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-fernet-keys\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.463845 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-credential-keys\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.469290 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-combined-ca-bundle\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.470422 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-scripts\") pod \"keystone-bootstrap-2bcnx\" (UID: 
\"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.471585 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-config-data\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.478517 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg2l\" (UniqueName: \"kubernetes.io/projected/e066ca3a-3c05-43c9-9466-688175328549-kube-api-access-zjg2l\") pod \"keystone-bootstrap-2bcnx\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.483971 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l684g\" (UniqueName: \"kubernetes.io/projected/0439b521-3ec7-4d91-8eeb-3b18e0350c20-kube-api-access-l684g\") pod \"dnsmasq-dns-6c4d4946ff-8q56m\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.577203 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lczzw"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.578262 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.589990 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-mgzm4"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.594592 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: W1201 03:13:35.594606 4880 reflector.go:561] object-"openstack"/"cinder-config-data": failed to list *v1.Secret: secrets "cinder-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 01 03:13:35 crc kubenswrapper[4880]: E1201 03:13:35.594661 4880 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.598726 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-jclgj" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.604303 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.604516 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.613403 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.613585 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6kc8g" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.614825 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.623688 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lczzw"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.644961 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-798dcf488c-4k96z"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.646595 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.654749 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mgzm4"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.688301 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.688556 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.688699 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.688811 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tvfdd" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.714821 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-798dcf488c-4k96z"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.743983 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8llx4"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.744986 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756625 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-combined-ca-bundle\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756660 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd7c5af-516a-4215-8ff3-73c83a234c97-logs\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756675 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-config-data\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756714 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzwb\" (UniqueName: \"kubernetes.io/projected/81ee6695-1440-4087-b17a-0af2371eceed-kube-api-access-8gzwb\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 
03:13:35.756736 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-etc-machine-id\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756754 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-combined-ca-bundle\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756771 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-db-sync-config-data\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756785 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvhq\" (UniqueName: \"kubernetes.io/projected/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-kube-api-access-xcvhq\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756805 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-scripts\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756821 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-scripts\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756892 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-config-data\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756925 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bd7c5af-516a-4215-8ff3-73c83a234c97-horizon-secret-key\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756950 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-config-data\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.756969 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4t2\" (UniqueName: \"kubernetes.io/projected/0bd7c5af-516a-4215-8ff3-73c83a234c97-kube-api-access-9x4t2\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.764582 4880 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.764777 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.764936 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ljj48" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.788541 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8llx4"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859473 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfvh\" (UniqueName: \"kubernetes.io/projected/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-kube-api-access-rdfvh\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859520 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-scripts\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859545 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-scripts\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859592 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-config\") pod \"neutron-db-sync-8llx4\" (UID: 
\"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859641 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-config-data\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859680 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bd7c5af-516a-4215-8ff3-73c83a234c97-horizon-secret-key\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859712 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-config-data\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859735 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4t2\" (UniqueName: \"kubernetes.io/projected/0bd7c5af-516a-4215-8ff3-73c83a234c97-kube-api-access-9x4t2\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859759 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-combined-ca-bundle\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc 
kubenswrapper[4880]: I1201 03:13:35.859781 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd7c5af-516a-4215-8ff3-73c83a234c97-logs\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859800 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-config-data\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859843 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzwb\" (UniqueName: \"kubernetes.io/projected/81ee6695-1440-4087-b17a-0af2371eceed-kube-api-access-8gzwb\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859863 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-etc-machine-id\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859898 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-combined-ca-bundle\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859914 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-db-sync-config-data\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859930 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcvhq\" (UniqueName: \"kubernetes.io/projected/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-kube-api-access-xcvhq\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.859951 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-combined-ca-bundle\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.862550 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gxfjq"] Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.863820 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.864532 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-etc-machine-id\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.865068 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd7c5af-516a-4215-8ff3-73c83a234c97-logs\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.865966 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-scripts\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.866185 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-config-data\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.874202 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-scripts\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.887846 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-config-data\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.900495 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8nbnh" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.900669 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.901811 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.904068 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bd7c5af-516a-4215-8ff3-73c83a234c97-horizon-secret-key\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.914487 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-combined-ca-bundle\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.915254 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-combined-ca-bundle\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961527 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-combined-ca-bundle\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961563 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfvh\" (UniqueName: \"kubernetes.io/projected/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-kube-api-access-rdfvh\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961609 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcxx\" (UniqueName: \"kubernetes.io/projected/fb21b71d-303a-4e92-9086-789ded0f11fa-kube-api-access-sqcxx\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961638 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-config\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961670 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-combined-ca-bundle\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961694 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fb21b71d-303a-4e92-9086-789ded0f11fa-logs\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961731 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-config-data\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.961793 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-scripts\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.969655 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-config\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.972021 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzwb\" (UniqueName: \"kubernetes.io/projected/81ee6695-1440-4087-b17a-0af2371eceed-kube-api-access-8gzwb\") pod \"heat-db-sync-mgzm4\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.979494 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-combined-ca-bundle\") pod \"neutron-db-sync-8llx4\" (UID: 
\"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.988399 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvhq\" (UniqueName: \"kubernetes.io/projected/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-kube-api-access-xcvhq\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:35 crc kubenswrapper[4880]: I1201 03:13:35.997430 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4t2\" (UniqueName: \"kubernetes.io/projected/0bd7c5af-516a-4215-8ff3-73c83a234c97-kube-api-access-9x4t2\") pod \"horizon-798dcf488c-4k96z\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.016913 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c4d4946ff-8q56m"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.027914 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gxfjq"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.036960 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-585764b957-fcwmr"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.038481 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.046324 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zncgf"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.047379 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.051749 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.051817 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfvh\" (UniqueName: \"kubernetes.io/projected/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-kube-api-access-rdfvh\") pod \"neutron-db-sync-8llx4\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.053735 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.064668 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-scripts\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.064737 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqcxx\" (UniqueName: \"kubernetes.io/projected/fb21b71d-303a-4e92-9086-789ded0f11fa-kube-api-access-sqcxx\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.064785 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-combined-ca-bundle\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.064810 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb21b71d-303a-4e92-9086-789ded0f11fa-logs\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.064851 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-config-data\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.069245 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb21b71d-303a-4e92-9086-789ded0f11fa-logs\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.071851 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-scripts\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.072097 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-combined-ca-bundle\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.072553 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-config-data\") pod \"placement-db-sync-gxfjq\" (UID: 
\"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.086816 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.087033 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.087148 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2qx9w" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.087239 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.097746 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.138433 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8llx4" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.144803 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqcxx\" (UniqueName: \"kubernetes.io/projected/fb21b71d-303a-4e92-9086-789ded0f11fa-kube-api-access-sqcxx\") pod \"placement-db-sync-gxfjq\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.167777 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.167836 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-config-data\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.167855 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94a9fea-b4b8-4f53-8190-9ababbd17d49-logs\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.167988 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbds\" (UniqueName: \"kubernetes.io/projected/010f41a5-3ac7-48d3-b20c-e9b8add221ca-kube-api-access-5jbds\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc 
kubenswrapper[4880]: I1201 03:13:36.168006 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-run-httpd\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168035 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hd8t\" (UniqueName: \"kubernetes.io/projected/d3288e77-4e64-48d4-995e-93abe07bf1bd-kube-api-access-4hd8t\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168062 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-log-httpd\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168082 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-scripts\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168101 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-db-sync-config-data\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168119 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-config-data\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168134 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78k4f\" (UniqueName: \"kubernetes.io/projected/f94a9fea-b4b8-4f53-8190-9ababbd17d49-kube-api-access-78k4f\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168184 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-combined-ca-bundle\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168204 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168226 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f94a9fea-b4b8-4f53-8190-9ababbd17d49-horizon-secret-key\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.168247 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-scripts\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.174940 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zncgf"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.224332 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-585764b957-fcwmr"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.231206 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gxfjq" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.238708 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgzm4" Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.254481 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8669fc467f-m9rgg"] Dec 01 03:13:36 crc kubenswrapper[4880]: I1201 03:13:36.256184 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271133 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hd8t\" (UniqueName: \"kubernetes.io/projected/d3288e77-4e64-48d4-995e-93abe07bf1bd-kube-api-access-4hd8t\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271184 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-log-httpd\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271208 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-scripts\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271234 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-db-sync-config-data\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271260 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-config-data\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271274 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78k4f\" (UniqueName: \"kubernetes.io/projected/f94a9fea-b4b8-4f53-8190-9ababbd17d49-kube-api-access-78k4f\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271326 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-combined-ca-bundle\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271348 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271369 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f94a9fea-b4b8-4f53-8190-9ababbd17d49-horizon-secret-key\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271385 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-scripts\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271411 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271433 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-config-data\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271449 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94a9fea-b4b8-4f53-8190-9ababbd17d49-logs\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271483 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jbds\" (UniqueName: \"kubernetes.io/projected/010f41a5-3ac7-48d3-b20c-e9b8add221ca-kube-api-access-5jbds\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.271498 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-run-httpd\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.292255 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-run-httpd\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc 
kubenswrapper[4880]: I1201 03:13:36.294363 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.294387 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8669fc467f-m9rgg"] Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.294833 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-log-httpd\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.300890 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-config-data\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.301330 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-scripts\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.306098 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94a9fea-b4b8-4f53-8190-9ababbd17d49-logs\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.321438 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-db-sync-config-data\") pod \"barbican-db-sync-zncgf\" 
(UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.331827 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-combined-ca-bundle\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.356529 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hd8t\" (UniqueName: \"kubernetes.io/projected/d3288e77-4e64-48d4-995e-93abe07bf1bd-kube-api-access-4hd8t\") pod \"barbican-db-sync-zncgf\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.367973 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.368777 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-config-data\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.373524 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78k4f\" (UniqueName: \"kubernetes.io/projected/f94a9fea-b4b8-4f53-8190-9ababbd17d49-kube-api-access-78k4f\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.373925 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-sb\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.374027 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-nb\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.374090 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-swift-storage-0\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.374106 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpdt\" (UniqueName: \"kubernetes.io/projected/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-kube-api-access-bgpdt\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.374145 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-config\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc 
kubenswrapper[4880]: I1201 03:13:36.374159 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-svc\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.374894 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f94a9fea-b4b8-4f53-8190-9ababbd17d49-horizon-secret-key\") pod \"horizon-585764b957-fcwmr\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.375109 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.391542 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jbds\" (UniqueName: \"kubernetes.io/projected/010f41a5-3ac7-48d3-b20c-e9b8add221ca-kube-api-access-5jbds\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.413262 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zncgf" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.448426 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-scripts\") pod \"ceilometer-0\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.491628 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-swift-storage-0\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.491693 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpdt\" (UniqueName: \"kubernetes.io/projected/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-kube-api-access-bgpdt\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.491739 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-config\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.491765 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-svc\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 
03:13:36.491816 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-sb\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.494178 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-swift-storage-0\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.494925 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-config\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.495283 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-svc\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.527687 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-sb\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.562145 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-nb\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.563114 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-nb\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.610247 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpdt\" (UniqueName: \"kubernetes.io/projected/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-kube-api-access-bgpdt\") pod \"dnsmasq-dns-8669fc467f-m9rgg\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.660667 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.674630 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.690590 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.692767 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.712942 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.713185 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.713380 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.713632 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kg2c4" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.713860 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.724518 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-db-sync-config-data\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.726350 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.729253 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-config-data\") pod \"cinder-db-sync-lczzw\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.735725 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.781721 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-scripts\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782559 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782585 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782624 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-config-data\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782704 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " 
pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782736 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-logs\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782749 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.782814 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2slw\" (UniqueName: \"kubernetes.io/projected/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-kube-api-access-w2slw\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.836514 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lczzw" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887100 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-logs\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887131 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887189 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2slw\" (UniqueName: \"kubernetes.io/projected/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-kube-api-access-w2slw\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887247 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-scripts\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887268 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc 
kubenswrapper[4880]: I1201 03:13:36.887289 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887315 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-config-data\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887362 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.887746 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.888757 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-logs\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.891534 4880 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.892533 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.899297 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.901058 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-scripts\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.901743 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-config-data\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.931472 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2slw\" (UniqueName: 
\"kubernetes.io/projected/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-kube-api-access-w2slw\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:36.940998 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.046386 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.047940 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.052883 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.052959 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.068676 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.147451 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193156 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193630 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-logs\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193660 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzv9t\" (UniqueName: \"kubernetes.io/projected/381e837e-ca4b-4b5a-8095-1f2894e75c58-kube-api-access-nzv9t\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193730 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193793 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193818 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193886 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.193910 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.295955 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296635 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296659 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296686 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296720 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-logs\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296737 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzv9t\" (UniqueName: \"kubernetes.io/projected/381e837e-ca4b-4b5a-8095-1f2894e75c58-kube-api-access-nzv9t\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296789 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " 
pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.296835 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.297285 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.297742 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-logs\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.297947 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.301288 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc 
kubenswrapper[4880]: I1201 03:13:37.307498 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.308067 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-config-data\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.315500 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-scripts\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.328694 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzv9t\" (UniqueName: \"kubernetes.io/projected/381e837e-ca4b-4b5a-8095-1f2894e75c58-kube-api-access-nzv9t\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.344017 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.393114 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:13:37 crc kubenswrapper[4880]: I1201 03:13:37.611118 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2bcnx"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.013324 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2bcnx" event={"ID":"e066ca3a-3c05-43c9-9466-688175328549","Type":"ContainerStarted","Data":"fbaad87f97383177ef74287211bef8bf252dc80c0984dd3baf220229501ec8f0"} Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.013728 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2bcnx" event={"ID":"e066ca3a-3c05-43c9-9466-688175328549","Type":"ContainerStarted","Data":"f37e195706903339ed8f9922e70af321b5e1290602f266703e615085a6e4a44b"} Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.035173 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2bcnx" podStartSLOduration=3.035156704 podStartE2EDuration="3.035156704s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:38.028743503 +0000 UTC m=+1047.539997875" watchObservedRunningTime="2025-12-01 03:13:38.035156704 +0000 UTC m=+1047.546411066" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.365767 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.419760 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-585764b957-fcwmr"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.452196 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mgzm4"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.487192 4880 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/horizon-6b8bdd5655-fwp7z"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.505950 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.506065 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.521818 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b8bdd5655-fwp7z"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.596504 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gxfjq"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.626678 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.627124 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-config-data\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.628545 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6b0956-20fc-4437-a91e-d263641f40f0-logs\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.628677 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22r9\" (UniqueName: \"kubernetes.io/projected/ed6b0956-20fc-4437-a91e-d263641f40f0-kube-api-access-f22r9\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: 
\"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.628775 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed6b0956-20fc-4437-a91e-d263641f40f0-horizon-secret-key\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.643286 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-scripts\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.640252 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lczzw"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.673305 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zncgf"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.683798 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-585764b957-fcwmr"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.697686 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-798dcf488c-4k96z"] Dec 01 03:13:38 crc kubenswrapper[4880]: W1201 03:13:38.703628 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod010f41a5_3ac7_48d3_b20c_e9b8add221ca.slice/crio-2798d4d74779468d8aebcc914157aa6efcdf1b3f21147abc667e58286400338e WatchSource:0}: Error finding container 2798d4d74779468d8aebcc914157aa6efcdf1b3f21147abc667e58286400338e: Status 404 returned error can't find the container with id 
2798d4d74779468d8aebcc914157aa6efcdf1b3f21147abc667e58286400338e Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.703726 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.720928 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c4d4946ff-8q56m"] Dec 01 03:13:38 crc kubenswrapper[4880]: W1201 03:13:38.723184 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb21b71d_303a_4e92_9086_789ded0f11fa.slice/crio-d7388ef2b1420b0f36a4f6bec9fc3723b76a63b2a3ebb038fb07af043c6dd0bb WatchSource:0}: Error finding container d7388ef2b1420b0f36a4f6bec9fc3723b76a63b2a3ebb038fb07af043c6dd0bb: Status 404 returned error can't find the container with id d7388ef2b1420b0f36a4f6bec9fc3723b76a63b2a3ebb038fb07af043c6dd0bb Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.746555 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-config-data\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.746657 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6b0956-20fc-4437-a91e-d263641f40f0-logs\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.746737 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22r9\" (UniqueName: \"kubernetes.io/projected/ed6b0956-20fc-4437-a91e-d263641f40f0-kube-api-access-f22r9\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: 
\"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.746804 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed6b0956-20fc-4437-a91e-d263641f40f0-horizon-secret-key\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.746859 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-scripts\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.747688 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-scripts\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.748679 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-config-data\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.748971 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6b0956-20fc-4437-a91e-d263641f40f0-logs\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.778307 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22r9\" (UniqueName: \"kubernetes.io/projected/ed6b0956-20fc-4437-a91e-d263641f40f0-kube-api-access-f22r9\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.782242 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed6b0956-20fc-4437-a91e-d263641f40f0-horizon-secret-key\") pod \"horizon-6b8bdd5655-fwp7z\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.819976 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8llx4"] Dec 01 03:13:38 crc kubenswrapper[4880]: W1201 03:13:38.821152 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0439b521_3ec7_4d91_8eeb_3b18e0350c20.slice/crio-00e7b88cd46e4c52fe1ae599a2fd20b8dc12d094b4f9edbfaab09412908df2a7 WatchSource:0}: Error finding container 00e7b88cd46e4c52fe1ae599a2fd20b8dc12d094b4f9edbfaab09412908df2a7: Status 404 returned error can't find the container with id 00e7b88cd46e4c52fe1ae599a2fd20b8dc12d094b4f9edbfaab09412908df2a7 Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.835449 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.842272 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8669fc467f-m9rgg"] Dec 01 03:13:38 crc kubenswrapper[4880]: I1201 03:13:38.902723 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.024248 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d","Type":"ContainerStarted","Data":"4008641adaebbd33da35a20b48ecad22e61114a7d0a922b67b01ec13a75f36b8"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.039061 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-798dcf488c-4k96z" event={"ID":"0bd7c5af-516a-4215-8ff3-73c83a234c97","Type":"ContainerStarted","Data":"15fc83ba9d4e6147ca865647b03a5b8f5570c861fd23c1c3407c5717b667ad44"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.047570 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lczzw" event={"ID":"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31","Type":"ContainerStarted","Data":"a7722f7c911aaf4d1eb32383c7f023e42e2d30dbd358600b1409bd64576a5739"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.054480 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxfjq" event={"ID":"fb21b71d-303a-4e92-9086-789ded0f11fa","Type":"ContainerStarted","Data":"d7388ef2b1420b0f36a4f6bec9fc3723b76a63b2a3ebb038fb07af043c6dd0bb"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.061260 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgzm4" event={"ID":"81ee6695-1440-4087-b17a-0af2371eceed","Type":"ContainerStarted","Data":"1ec45f1612c1c13fb15157c25792dab2a6b203a187f57a89e31ede08bf7d3286"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 
03:13:39.108171 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8llx4" event={"ID":"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d","Type":"ContainerStarted","Data":"8f86d89ebe0adef0688d9491ba3b915da2a9de42d3932c42856f31c3c135c724"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.111751 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerStarted","Data":"2798d4d74779468d8aebcc914157aa6efcdf1b3f21147abc667e58286400338e"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.133484 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" event={"ID":"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9","Type":"ContainerStarted","Data":"e4b2e7ff44b97dc1d2f69d4c21b541fef511b27fcc837c80bb98ccacdcce3e36"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.137433 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" event={"ID":"0439b521-3ec7-4d91-8eeb-3b18e0350c20","Type":"ContainerStarted","Data":"00e7b88cd46e4c52fe1ae599a2fd20b8dc12d094b4f9edbfaab09412908df2a7"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.138529 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585764b957-fcwmr" event={"ID":"f94a9fea-b4b8-4f53-8190-9ababbd17d49","Type":"ContainerStarted","Data":"b3c02865a1b6497fb67ec9b5d5e71a8e9d45bba4955ad836646f00d8d10fa866"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.139696 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zncgf" event={"ID":"d3288e77-4e64-48d4-995e-93abe07bf1bd","Type":"ContainerStarted","Data":"f2447e9796430323003463ed257fc3e6a05173ec2c030486c13ff35cc0c9eb37"} Dec 01 03:13:39 crc kubenswrapper[4880]: I1201 03:13:39.424475 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b8bdd5655-fwp7z"] Dec 01 03:13:39 crc 
kubenswrapper[4880]: I1201 03:13:39.832231 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:13:39 crc kubenswrapper[4880]: W1201 03:13:39.857259 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381e837e_ca4b_4b5a_8095_1f2894e75c58.slice/crio-d7fabc67e65a753aef11dff5d9ff1e5cbbbdd127c0545917c99ccde9ab7b3740 WatchSource:0}: Error finding container d7fabc67e65a753aef11dff5d9ff1e5cbbbdd127c0545917c99ccde9ab7b3740: Status 404 returned error can't find the container with id d7fabc67e65a753aef11dff5d9ff1e5cbbbdd127c0545917c99ccde9ab7b3740 Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.169897 4880 generic.go:334] "Generic (PLEG): container finished" podID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerID="7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744" exitCode=0 Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.169964 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" event={"ID":"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9","Type":"ContainerDied","Data":"7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744"} Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.177179 4880 generic.go:334] "Generic (PLEG): container finished" podID="0439b521-3ec7-4d91-8eeb-3b18e0350c20" containerID="853e716624a0432f7fbd08fa7a29de523c33d51aa9e496f28702f31b0da0ffcc" exitCode=0 Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.177223 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" event={"ID":"0439b521-3ec7-4d91-8eeb-3b18e0350c20","Type":"ContainerDied","Data":"853e716624a0432f7fbd08fa7a29de523c33d51aa9e496f28702f31b0da0ffcc"} Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.189321 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"381e837e-ca4b-4b5a-8095-1f2894e75c58","Type":"ContainerStarted","Data":"d7fabc67e65a753aef11dff5d9ff1e5cbbbdd127c0545917c99ccde9ab7b3740"} Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.221687 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b8bdd5655-fwp7z" event={"ID":"ed6b0956-20fc-4437-a91e-d263641f40f0","Type":"ContainerStarted","Data":"07130ae9bcc8e6f71076a097e519a324577a8503e1c3ef3e3f27b09ade4be40d"} Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.228808 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8llx4" event={"ID":"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d","Type":"ContainerStarted","Data":"b7a897e083947741245c6b8affa4e798206c5d4101e7dfae27863946a196d592"} Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.271389 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8llx4" podStartSLOduration=5.271373052 podStartE2EDuration="5.271373052s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:40.263685049 +0000 UTC m=+1049.774939421" watchObservedRunningTime="2025-12-01 03:13:40.271373052 +0000 UTC m=+1049.782627424" Dec 01 03:13:40 crc kubenswrapper[4880]: I1201 03:13:40.971214 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.109884 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-config\") pod \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.109958 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l684g\" (UniqueName: \"kubernetes.io/projected/0439b521-3ec7-4d91-8eeb-3b18e0350c20-kube-api-access-l684g\") pod \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.109999 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-svc\") pod \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.110023 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-sb\") pod \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.110065 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-swift-storage-0\") pod \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.110110 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-nb\") pod \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\" (UID: \"0439b521-3ec7-4d91-8eeb-3b18e0350c20\") " Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.121013 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0439b521-3ec7-4d91-8eeb-3b18e0350c20-kube-api-access-l684g" (OuterVolumeSpecName: "kube-api-access-l684g") pod "0439b521-3ec7-4d91-8eeb-3b18e0350c20" (UID: "0439b521-3ec7-4d91-8eeb-3b18e0350c20"). InnerVolumeSpecName "kube-api-access-l684g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.143660 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0439b521-3ec7-4d91-8eeb-3b18e0350c20" (UID: "0439b521-3ec7-4d91-8eeb-3b18e0350c20"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.152336 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0439b521-3ec7-4d91-8eeb-3b18e0350c20" (UID: "0439b521-3ec7-4d91-8eeb-3b18e0350c20"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.165636 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0439b521-3ec7-4d91-8eeb-3b18e0350c20" (UID: "0439b521-3ec7-4d91-8eeb-3b18e0350c20"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.188453 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-config" (OuterVolumeSpecName: "config") pod "0439b521-3ec7-4d91-8eeb-3b18e0350c20" (UID: "0439b521-3ec7-4d91-8eeb-3b18e0350c20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.208991 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0439b521-3ec7-4d91-8eeb-3b18e0350c20" (UID: "0439b521-3ec7-4d91-8eeb-3b18e0350c20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.212529 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.212547 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l684g\" (UniqueName: \"kubernetes.io/projected/0439b521-3ec7-4d91-8eeb-3b18e0350c20-kube-api-access-l684g\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.212557 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.212566 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 
03:13:41.212575 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.212583 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0439b521-3ec7-4d91-8eeb-3b18e0350c20-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.244470 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.244518 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4d4946ff-8q56m" event={"ID":"0439b521-3ec7-4d91-8eeb-3b18e0350c20","Type":"ContainerDied","Data":"00e7b88cd46e4c52fe1ae599a2fd20b8dc12d094b4f9edbfaab09412908df2a7"} Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.244549 4880 scope.go:117] "RemoveContainer" containerID="853e716624a0432f7fbd08fa7a29de523c33d51aa9e496f28702f31b0da0ffcc" Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.343969 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c4d4946ff-8q56m"] Dec 01 03:13:41 crc kubenswrapper[4880]: I1201 03:13:41.352828 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c4d4946ff-8q56m"] Dec 01 03:13:42 crc kubenswrapper[4880]: I1201 03:13:42.264301 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d","Type":"ContainerStarted","Data":"e56af5014235d8a8bdbbcfab205a7286265b867d6b0dd3bb36d7eef94448d3fb"} Dec 01 03:13:42 crc kubenswrapper[4880]: I1201 03:13:42.282908 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" 
event={"ID":"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9","Type":"ContainerStarted","Data":"f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183"} Dec 01 03:13:42 crc kubenswrapper[4880]: I1201 03:13:42.282983 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:42 crc kubenswrapper[4880]: I1201 03:13:42.292621 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"381e837e-ca4b-4b5a-8095-1f2894e75c58","Type":"ContainerStarted","Data":"2d6185e84047674b4a2602852cc3be2cbcc128820fa83ccde51b2537064f3783"} Dec 01 03:13:42 crc kubenswrapper[4880]: I1201 03:13:42.313446 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" podStartSLOduration=6.313426131 podStartE2EDuration="6.313426131s" podCreationTimestamp="2025-12-01 03:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:42.308146079 +0000 UTC m=+1051.819400451" watchObservedRunningTime="2025-12-01 03:13:42.313426131 +0000 UTC m=+1051.824680503" Dec 01 03:13:42 crc kubenswrapper[4880]: I1201 03:13:42.798507 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0439b521-3ec7-4d91-8eeb-3b18e0350c20" path="/var/lib/kubelet/pods/0439b521-3ec7-4d91-8eeb-3b18e0350c20/volumes" Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.310972 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d","Type":"ContainerStarted","Data":"414819b743e4d62bce116c64ce9d9b0648de297b47b867cb07f9b57ea476e794"} Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.311293 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-log" containerID="cri-o://e56af5014235d8a8bdbbcfab205a7286265b867d6b0dd3bb36d7eef94448d3fb" gracePeriod=30 Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.311685 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-httpd" containerID="cri-o://414819b743e4d62bce116c64ce9d9b0648de297b47b867cb07f9b57ea476e794" gracePeriod=30 Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.333002 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.332987875 podStartE2EDuration="7.332987875s" podCreationTimestamp="2025-12-01 03:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:43.329697102 +0000 UTC m=+1052.840951484" watchObservedRunningTime="2025-12-01 03:13:43.332987875 +0000 UTC m=+1052.844242247" Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.347412 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-log" containerID="cri-o://2d6185e84047674b4a2602852cc3be2cbcc128820fa83ccde51b2537064f3783" gracePeriod=30 Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.347685 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"381e837e-ca4b-4b5a-8095-1f2894e75c58","Type":"ContainerStarted","Data":"59bb1ff9f013e5cd26f215930f6d7e49489a720a18c1eefc68dadfa194ee7603"} Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.347944 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-httpd" containerID="cri-o://59bb1ff9f013e5cd26f215930f6d7e49489a720a18c1eefc68dadfa194ee7603" gracePeriod=30 Dec 01 03:13:43 crc kubenswrapper[4880]: I1201 03:13:43.396450 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.396427429 podStartE2EDuration="8.396427429s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:13:43.39291348 +0000 UTC m=+1052.904167872" watchObservedRunningTime="2025-12-01 03:13:43.396427429 +0000 UTC m=+1052.907681801" Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.360708 4880 generic.go:334] "Generic (PLEG): container finished" podID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerID="59bb1ff9f013e5cd26f215930f6d7e49489a720a18c1eefc68dadfa194ee7603" exitCode=0 Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.360742 4880 generic.go:334] "Generic (PLEG): container finished" podID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerID="2d6185e84047674b4a2602852cc3be2cbcc128820fa83ccde51b2537064f3783" exitCode=143 Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.360794 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"381e837e-ca4b-4b5a-8095-1f2894e75c58","Type":"ContainerDied","Data":"59bb1ff9f013e5cd26f215930f6d7e49489a720a18c1eefc68dadfa194ee7603"} Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.360841 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"381e837e-ca4b-4b5a-8095-1f2894e75c58","Type":"ContainerDied","Data":"2d6185e84047674b4a2602852cc3be2cbcc128820fa83ccde51b2537064f3783"} Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.364559 4880 generic.go:334] "Generic (PLEG): container 
finished" podID="e066ca3a-3c05-43c9-9466-688175328549" containerID="fbaad87f97383177ef74287211bef8bf252dc80c0984dd3baf220229501ec8f0" exitCode=0 Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.364616 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2bcnx" event={"ID":"e066ca3a-3c05-43c9-9466-688175328549","Type":"ContainerDied","Data":"fbaad87f97383177ef74287211bef8bf252dc80c0984dd3baf220229501ec8f0"} Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.366890 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d","Type":"ContainerDied","Data":"414819b743e4d62bce116c64ce9d9b0648de297b47b867cb07f9b57ea476e794"} Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.366770 4880 generic.go:334] "Generic (PLEG): container finished" podID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerID="414819b743e4d62bce116c64ce9d9b0648de297b47b867cb07f9b57ea476e794" exitCode=0 Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.367580 4880 generic.go:334] "Generic (PLEG): container finished" podID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerID="e56af5014235d8a8bdbbcfab205a7286265b867d6b0dd3bb36d7eef94448d3fb" exitCode=143 Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.367679 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d","Type":"ContainerDied","Data":"e56af5014235d8a8bdbbcfab205a7286265b867d6b0dd3bb36d7eef94448d3fb"} Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.912481 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-798dcf488c-4k96z"] Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.943428 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6ddc7fc844-5qd9h"] Dec 01 03:13:44 crc kubenswrapper[4880]: E1201 03:13:44.943786 4880 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0439b521-3ec7-4d91-8eeb-3b18e0350c20" containerName="init" Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.943801 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0439b521-3ec7-4d91-8eeb-3b18e0350c20" containerName="init" Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.943980 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="0439b521-3ec7-4d91-8eeb-3b18e0350c20" containerName="init" Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.944793 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.949137 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 01 03:13:44 crc kubenswrapper[4880]: I1201 03:13:44.990295 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6ddc7fc844-5qd9h"] Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.074222 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b8bdd5655-fwp7z"] Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.094988 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56cc96959b-rrjz7"] Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.095948 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-scripts\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096025 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtxq\" (UniqueName: \"kubernetes.io/projected/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-kube-api-access-5xtxq\") pod 
\"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096099 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-tls-certs\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096123 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-logs\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096139 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-combined-ca-bundle\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096155 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-config-data\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096196 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-secret-key\") pod 
\"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.096562 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.105991 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56cc96959b-rrjz7"] Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201625 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-horizon-tls-certs\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201682 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc7l\" (UniqueName: \"kubernetes.io/projected/24a10152-f651-41de-9680-872d96690cd5-kube-api-access-7rc7l\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201704 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a10152-f651-41de-9680-872d96690cd5-scripts\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201727 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtxq\" (UniqueName: \"kubernetes.io/projected/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-kube-api-access-5xtxq\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: 
\"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201756 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a10152-f651-41de-9680-872d96690cd5-logs\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201802 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-horizon-secret-key\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201837 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-tls-certs\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201858 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-logs\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201889 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-combined-ca-bundle\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " 
pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201904 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-config-data\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201925 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a10152-f651-41de-9680-872d96690cd5-config-data\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201964 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-secret-key\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.201980 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-combined-ca-bundle\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.202000 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-scripts\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc 
kubenswrapper[4880]: I1201 03:13:45.202691 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-scripts\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.204851 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-logs\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.205144 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-config-data\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.208267 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-tls-certs\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.210688 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-combined-ca-bundle\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.214106 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-secret-key\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.222428 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtxq\" (UniqueName: \"kubernetes.io/projected/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-kube-api-access-5xtxq\") pod \"horizon-6ddc7fc844-5qd9h\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.275789 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303180 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24a10152-f651-41de-9680-872d96690cd5-scripts\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303238 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a10152-f651-41de-9680-872d96690cd5-logs\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303286 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-horizon-secret-key\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303334 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a10152-f651-41de-9680-872d96690cd5-config-data\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303373 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-combined-ca-bundle\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303404 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-horizon-tls-certs\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.303437 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc7l\" (UniqueName: \"kubernetes.io/projected/24a10152-f651-41de-9680-872d96690cd5-kube-api-access-7rc7l\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.304051 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24a10152-f651-41de-9680-872d96690cd5-logs\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.304326 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/24a10152-f651-41de-9680-872d96690cd5-scripts\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.308335 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24a10152-f651-41de-9680-872d96690cd5-config-data\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.309470 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-combined-ca-bundle\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.311830 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-horizon-secret-key\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.319857 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc7l\" (UniqueName: \"kubernetes.io/projected/24a10152-f651-41de-9680-872d96690cd5-kube-api-access-7rc7l\") pod \"horizon-56cc96959b-rrjz7\" (UID: \"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.320193 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24a10152-f651-41de-9680-872d96690cd5-horizon-tls-certs\") pod \"horizon-56cc96959b-rrjz7\" (UID: 
\"24a10152-f651-41de-9680-872d96690cd5\") " pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:45 crc kubenswrapper[4880]: I1201 03:13:45.440349 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:13:46 crc kubenswrapper[4880]: I1201 03:13:46.663312 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:13:46 crc kubenswrapper[4880]: I1201 03:13:46.735726 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69456b8679-gnrn4"] Dec 01 03:13:46 crc kubenswrapper[4880]: I1201 03:13:46.736066 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" containerID="cri-o://5171f871e6d95428c049012148c9a0c79198b7125214c960436f3ea2c2af2217" gracePeriod=10 Dec 01 03:13:47 crc kubenswrapper[4880]: I1201 03:13:47.415346 4880 generic.go:334] "Generic (PLEG): container finished" podID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerID="5171f871e6d95428c049012148c9a0c79198b7125214c960436f3ea2c2af2217" exitCode=0 Dec 01 03:13:47 crc kubenswrapper[4880]: I1201 03:13:47.415391 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" event={"ID":"7950bc21-2f03-4e16-a9e7-2c76a48078df","Type":"ContainerDied","Data":"5171f871e6d95428c049012148c9a0c79198b7125214c960436f3ea2c2af2217"} Dec 01 03:13:51 crc kubenswrapper[4880]: I1201 03:13:51.572521 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 01 03:13:53 crc kubenswrapper[4880]: E1201 03:13:53.526161 4880 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:53 crc kubenswrapper[4880]: E1201 03:13:53.526431 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:53 crc kubenswrapper[4880]: E1201 03:13:53.526554 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n678h5fch678h65dhd9h9hd9h589h59dh569h5c7h58ch667h54dh7bhbfhc5h7bhcch65bh98h5b8h67fh55chf6hch64fhb9hb9h5fbh64h5c6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jbds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccoun
t,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(010f41a5-3ac7-48d3-b20c-e9b8add221ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.602670 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.719817 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-config-data\") pod \"e066ca3a-3c05-43c9-9466-688175328549\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.719931 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-fernet-keys\") pod \"e066ca3a-3c05-43c9-9466-688175328549\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.720020 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-scripts\") pod \"e066ca3a-3c05-43c9-9466-688175328549\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.720040 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-combined-ca-bundle\") pod \"e066ca3a-3c05-43c9-9466-688175328549\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.720100 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjg2l\" (UniqueName: \"kubernetes.io/projected/e066ca3a-3c05-43c9-9466-688175328549-kube-api-access-zjg2l\") pod \"e066ca3a-3c05-43c9-9466-688175328549\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.720131 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-credential-keys\") pod \"e066ca3a-3c05-43c9-9466-688175328549\" (UID: \"e066ca3a-3c05-43c9-9466-688175328549\") " Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.724725 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e066ca3a-3c05-43c9-9466-688175328549" (UID: "e066ca3a-3c05-43c9-9466-688175328549"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.727321 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-scripts" (OuterVolumeSpecName: "scripts") pod "e066ca3a-3c05-43c9-9466-688175328549" (UID: "e066ca3a-3c05-43c9-9466-688175328549"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.734660 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e066ca3a-3c05-43c9-9466-688175328549-kube-api-access-zjg2l" (OuterVolumeSpecName: "kube-api-access-zjg2l") pod "e066ca3a-3c05-43c9-9466-688175328549" (UID: "e066ca3a-3c05-43c9-9466-688175328549"). InnerVolumeSpecName "kube-api-access-zjg2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.734660 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e066ca3a-3c05-43c9-9466-688175328549" (UID: "e066ca3a-3c05-43c9-9466-688175328549"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.744615 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-config-data" (OuterVolumeSpecName: "config-data") pod "e066ca3a-3c05-43c9-9466-688175328549" (UID: "e066ca3a-3c05-43c9-9466-688175328549"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.747390 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e066ca3a-3c05-43c9-9466-688175328549" (UID: "e066ca3a-3c05-43c9-9466-688175328549"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.822161 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.822192 4880 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.822201 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.822209 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:53 crc 
kubenswrapper[4880]: I1201 03:13:53.822218 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjg2l\" (UniqueName: \"kubernetes.io/projected/e066ca3a-3c05-43c9-9466-688175328549-kube-api-access-zjg2l\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:53 crc kubenswrapper[4880]: I1201 03:13:53.822226 4880 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e066ca3a-3c05-43c9-9466-688175328549-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.498032 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2bcnx" event={"ID":"e066ca3a-3c05-43c9-9466-688175328549","Type":"ContainerDied","Data":"f37e195706903339ed8f9922e70af321b5e1290602f266703e615085a6e4a44b"} Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.498357 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37e195706903339ed8f9922e70af321b5e1290602f266703e615085a6e4a44b" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.498092 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2bcnx" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.727225 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2bcnx"] Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.739370 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2bcnx"] Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.797944 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e066ca3a-3c05-43c9-9466-688175328549" path="/var/lib/kubelet/pods/e066ca3a-3c05-43c9-9466-688175328549/volumes" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.802026 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9vgls"] Dec 01 03:13:54 crc kubenswrapper[4880]: E1201 03:13:54.802390 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e066ca3a-3c05-43c9-9466-688175328549" containerName="keystone-bootstrap" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.802406 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e066ca3a-3c05-43c9-9466-688175328549" containerName="keystone-bootstrap" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.802572 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e066ca3a-3c05-43c9-9466-688175328549" containerName="keystone-bootstrap" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.803136 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.805497 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.805754 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.806010 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.806130 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hsw47" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.806428 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.818297 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9vgls"] Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.939697 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd2n\" (UniqueName: \"kubernetes.io/projected/a15fa76f-467b-485e-96b8-5fdec71318f5-kube-api-access-mqd2n\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.939851 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-config-data\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.940036 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-credential-keys\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.940185 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-scripts\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.940335 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-fernet-keys\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:54 crc kubenswrapper[4880]: I1201 03:13:54.940390 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-combined-ca-bundle\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.041523 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-config-data\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.041598 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-credential-keys\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.041644 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-scripts\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.041687 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-fernet-keys\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.041711 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-combined-ca-bundle\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.041751 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqd2n\" (UniqueName: \"kubernetes.io/projected/a15fa76f-467b-485e-96b8-5fdec71318f5-kube-api-access-mqd2n\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.057313 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-combined-ca-bundle\") pod 
\"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.079260 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-scripts\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.079453 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-credential-keys\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.079682 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-config-data\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.081660 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqd2n\" (UniqueName: \"kubernetes.io/projected/a15fa76f-467b-485e-96b8-5fdec71318f5-kube-api-access-mqd2n\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc kubenswrapper[4880]: I1201 03:13:55.088338 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-fernet-keys\") pod \"keystone-bootstrap-9vgls\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:55 crc 
kubenswrapper[4880]: I1201 03:13:55.121899 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:13:56 crc kubenswrapper[4880]: I1201 03:13:56.572911 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.513439 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.513857 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.513982 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64dh569h668h64h74h59h54dh68dh99h5c9h687h58dh56dh664h5f6hddhd8h598h7bh57dh688h5b8h5d5h7ch67ch5d7hd6hfbhc7h658hd5h676q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78k4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-585764b957-fcwmr_openstack(f94a9fea-b4b8-4f53-8190-9ababbd17d49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 
03:13:59.517105 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"]" pod="openstack/horizon-585764b957-fcwmr" podUID="f94a9fea-b4b8-4f53-8190-9ababbd17d49" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.532782 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.532825 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.532953 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547h664h5b9h687hf6h84h59h5c8h55hd9h76hcfh557h699hc5h67dh85h5bbh5f9h59bh65ch656hb9h575h66fhddh588h66ch668h55bh65ch7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9x4t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-798dcf488c-4k96z_openstack(0bd7c5af-516a-4215-8ff3-73c83a234c97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 
03:13:59.540708 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"]" pod="openstack/horizon-798dcf488c-4k96z" podUID="0bd7c5af-516a-4215-8ff3-73c83a234c97" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.585060 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.585107 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.585226 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56h57fh67h684h77h575h574h644h64bhd4h5d8h5c7hfdh678h67ch645h68bh9fh67dhf7h54h9chc5h5bdhd7h5bdh546hd4h75h6chc4h58bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f22r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6b8bdd5655-fwp7z_openstack(ed6b0956-20fc-4437-a91e-d263641f40f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:13:59 crc kubenswrapper[4880]: I1201 
03:13:59.588227 4880 generic.go:334] "Generic (PLEG): container finished" podID="e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" containerID="b7a897e083947741245c6b8affa4e798206c5d4101e7dfae27863946a196d592" exitCode=0 Dec 01 03:13:59 crc kubenswrapper[4880]: I1201 03:13:59.588323 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8llx4" event={"ID":"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d","Type":"ContainerDied","Data":"b7a897e083947741245c6b8affa4e798206c5d4101e7dfae27863946a196d592"} Dec 01 03:13:59 crc kubenswrapper[4880]: E1201 03:13:59.591267 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"]" pod="openstack/horizon-6b8bdd5655-fwp7z" podUID="ed6b0956-20fc-4437-a91e-d263641f40f0" Dec 01 03:14:00 crc kubenswrapper[4880]: E1201 03:14:00.489802 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:14:00 crc kubenswrapper[4880]: E1201 03:14:00.490189 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:14:00 crc kubenswrapper[4880]: E1201 03:14:00.490311 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hd8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-zncgf_openstack(d3288e77-4e64-48d4-995e-93abe07bf1bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:14:00 crc kubenswrapper[4880]: E1201 03:14:00.491718 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-zncgf" 
podUID="d3288e77-4e64-48d4-995e-93abe07bf1bd" Dec 01 03:14:00 crc kubenswrapper[4880]: E1201 03:14:00.609211 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/barbican-db-sync-zncgf" podUID="d3288e77-4e64-48d4-995e-93abe07bf1bd" Dec 01 03:14:06 crc kubenswrapper[4880]: I1201 03:14:06.572201 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Dec 01 03:14:06 crc kubenswrapper[4880]: I1201 03:14:06.572897 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:14:07 crc kubenswrapper[4880]: I1201 03:14:07.148209 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 03:14:07 crc kubenswrapper[4880]: I1201 03:14:07.148257 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 03:14:07 crc kubenswrapper[4880]: I1201 03:14:07.393713 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:07 crc kubenswrapper[4880]: I1201 03:14:07.393756 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:11 crc kubenswrapper[4880]: I1201 03:14:11.572979 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Dec 01 03:14:14 crc 
kubenswrapper[4880]: I1201 03:14:14.646279 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.650661 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.671500 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.699187 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.707672 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.711821 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8llx4" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.734183 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.750648 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"381e837e-ca4b-4b5a-8095-1f2894e75c58","Type":"ContainerDied","Data":"d7fabc67e65a753aef11dff5d9ff1e5cbbbdd127c0545917c99ccde9ab7b3740"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.750699 4880 scope.go:117] "RemoveContainer" containerID="59bb1ff9f013e5cd26f215930f6d7e49489a720a18c1eefc68dadfa194ee7603" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.750806 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.755017 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585764b957-fcwmr" event={"ID":"f94a9fea-b4b8-4f53-8190-9ababbd17d49","Type":"ContainerDied","Data":"b3c02865a1b6497fb67ec9b5d5e71a8e9d45bba4955ad836646f00d8d10fa866"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.755070 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585764b957-fcwmr" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.756224 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b8bdd5655-fwp7z" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.756224 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b8bdd5655-fwp7z" event={"ID":"ed6b0956-20fc-4437-a91e-d263641f40f0","Type":"ContainerDied","Data":"07130ae9bcc8e6f71076a097e519a324577a8503e1c3ef3e3f27b09ade4be40d"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.758688 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d","Type":"ContainerDied","Data":"4008641adaebbd33da35a20b48ecad22e61114a7d0a922b67b01ec13a75f36b8"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.758923 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.772986 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94a9fea-b4b8-4f53-8190-9ababbd17d49-logs\") pod \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773049 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-config\") pod \"7950bc21-2f03-4e16-a9e7-2c76a48078df\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773082 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-scripts\") pod \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773113 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-svc\") pod \"7950bc21-2f03-4e16-a9e7-2c76a48078df\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773150 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78k4f\" (UniqueName: \"kubernetes.io/projected/f94a9fea-b4b8-4f53-8190-9ababbd17d49-kube-api-access-78k4f\") pod \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773194 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-nb\") pod \"7950bc21-2f03-4e16-a9e7-2c76a48078df\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773208 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-sb\") pod \"7950bc21-2f03-4e16-a9e7-2c76a48078df\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773269 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kmzw\" (UniqueName: \"kubernetes.io/projected/7950bc21-2f03-4e16-a9e7-2c76a48078df-kube-api-access-4kmzw\") pod \"7950bc21-2f03-4e16-a9e7-2c76a48078df\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773269 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f94a9fea-b4b8-4f53-8190-9ababbd17d49-logs" (OuterVolumeSpecName: "logs") pod "f94a9fea-b4b8-4f53-8190-9ababbd17d49" (UID: "f94a9fea-b4b8-4f53-8190-9ababbd17d49"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773302 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-config-data\") pod \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773339 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-swift-storage-0\") pod \"7950bc21-2f03-4e16-a9e7-2c76a48078df\" (UID: \"7950bc21-2f03-4e16-a9e7-2c76a48078df\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773359 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f94a9fea-b4b8-4f53-8190-9ababbd17d49-horizon-secret-key\") pod \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\" (UID: \"f94a9fea-b4b8-4f53-8190-9ababbd17d49\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.773640 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f94a9fea-b4b8-4f53-8190-9ababbd17d49-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.781107 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" event={"ID":"7950bc21-2f03-4e16-a9e7-2c76a48078df","Type":"ContainerDied","Data":"8c868dead88eb8be052b1e3d87075c798e42ac5dbdc5b0b64b6acb50589ead00"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.781260 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.786991 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94a9fea-b4b8-4f53-8190-9ababbd17d49-kube-api-access-78k4f" (OuterVolumeSpecName: "kube-api-access-78k4f") pod "f94a9fea-b4b8-4f53-8190-9ababbd17d49" (UID: "f94a9fea-b4b8-4f53-8190-9ababbd17d49"). InnerVolumeSpecName "kube-api-access-78k4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.791606 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-config-data" (OuterVolumeSpecName: "config-data") pod "f94a9fea-b4b8-4f53-8190-9ababbd17d49" (UID: "f94a9fea-b4b8-4f53-8190-9ababbd17d49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.795020 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-scripts" (OuterVolumeSpecName: "scripts") pod "f94a9fea-b4b8-4f53-8190-9ababbd17d49" (UID: "f94a9fea-b4b8-4f53-8190-9ababbd17d49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.795264 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8llx4" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.799541 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-798dcf488c-4k96z" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.800072 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7950bc21-2f03-4e16-a9e7-2c76a48078df-kube-api-access-4kmzw" (OuterVolumeSpecName: "kube-api-access-4kmzw") pod "7950bc21-2f03-4e16-a9e7-2c76a48078df" (UID: "7950bc21-2f03-4e16-a9e7-2c76a48078df"). InnerVolumeSpecName "kube-api-access-4kmzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.807960 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f94a9fea-b4b8-4f53-8190-9ababbd17d49-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f94a9fea-b4b8-4f53-8190-9ababbd17d49" (UID: "f94a9fea-b4b8-4f53-8190-9ababbd17d49"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.856821 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7950bc21-2f03-4e16-a9e7-2c76a48078df" (UID: "7950bc21-2f03-4e16-a9e7-2c76a48078df"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.865346 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-config" (OuterVolumeSpecName: "config") pod "7950bc21-2f03-4e16-a9e7-2c76a48078df" (UID: "7950bc21-2f03-4e16-a9e7-2c76a48078df"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.874443 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f22r9\" (UniqueName: \"kubernetes.io/projected/ed6b0956-20fc-4437-a91e-d263641f40f0-kube-api-access-f22r9\") pod \"ed6b0956-20fc-4437-a91e-d263641f40f0\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.874513 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.874538 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdfvh\" (UniqueName: \"kubernetes.io/projected/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-kube-api-access-rdfvh\") pod \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.875351 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-config\") pod \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.875377 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-scripts\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.875396 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzv9t\" (UniqueName: 
\"kubernetes.io/projected/381e837e-ca4b-4b5a-8095-1f2894e75c58-kube-api-access-nzv9t\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.875940 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-logs\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.875988 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-httpd-run\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876009 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-scripts\") pod \"0bd7c5af-516a-4215-8ff3-73c83a234c97\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876027 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd7c5af-516a-4215-8ff3-73c83a234c97-logs\") pod \"0bd7c5af-516a-4215-8ff3-73c83a234c97\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876056 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed6b0956-20fc-4437-a91e-d263641f40f0-horizon-secret-key\") pod \"ed6b0956-20fc-4437-a91e-d263641f40f0\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876078 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876099 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-config-data\") pod \"0bd7c5af-516a-4215-8ff3-73c83a234c97\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876121 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-config-data\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876151 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-scripts\") pod \"ed6b0956-20fc-4437-a91e-d263641f40f0\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876179 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-config-data\") pod \"ed6b0956-20fc-4437-a91e-d263641f40f0\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876200 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2slw\" (UniqueName: \"kubernetes.io/projected/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-kube-api-access-w2slw\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc 
kubenswrapper[4880]: I1201 03:14:14.876226 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-scripts\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876248 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-logs\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876267 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-public-tls-certs\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876284 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-combined-ca-bundle\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876301 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-httpd-run\") pod \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\" (UID: \"dbcb8b2d-b4f5-4382-a720-a8771cf56b2d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876322 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-combined-ca-bundle\") pod 
\"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\" (UID: \"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876340 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-internal-tls-certs\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876356 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6b0956-20fc-4437-a91e-d263641f40f0-logs\") pod \"ed6b0956-20fc-4437-a91e-d263641f40f0\" (UID: \"ed6b0956-20fc-4437-a91e-d263641f40f0\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876381 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-combined-ca-bundle\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876410 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bd7c5af-516a-4215-8ff3-73c83a234c97-horizon-secret-key\") pod \"0bd7c5af-516a-4215-8ff3-73c83a234c97\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876429 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x4t2\" (UniqueName: \"kubernetes.io/projected/0bd7c5af-516a-4215-8ff3-73c83a234c97-kube-api-access-9x4t2\") pod \"0bd7c5af-516a-4215-8ff3-73c83a234c97\" (UID: \"0bd7c5af-516a-4215-8ff3-73c83a234c97\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876446 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-config-data\") pod \"381e837e-ca4b-4b5a-8095-1f2894e75c58\" (UID: \"381e837e-ca4b-4b5a-8095-1f2894e75c58\") " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876768 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876781 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876791 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78k4f\" (UniqueName: \"kubernetes.io/projected/f94a9fea-b4b8-4f53-8190-9ababbd17d49-kube-api-access-78k4f\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876801 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kmzw\" (UniqueName: \"kubernetes.io/projected/7950bc21-2f03-4e16-a9e7-2c76a48078df-kube-api-access-4kmzw\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876809 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f94a9fea-b4b8-4f53-8190-9ababbd17d49-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876817 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.876826 4880 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/f94a9fea-b4b8-4f53-8190-9ababbd17d49-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.878168 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-logs" (OuterVolumeSpecName: "logs") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.881578 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7950bc21-2f03-4e16-a9e7-2c76a48078df" (UID: "7950bc21-2f03-4e16-a9e7-2c76a48078df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.881580 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6b0956-20fc-4437-a91e-d263641f40f0-kube-api-access-f22r9" (OuterVolumeSpecName: "kube-api-access-f22r9") pod "ed6b0956-20fc-4437-a91e-d263641f40f0" (UID: "ed6b0956-20fc-4437-a91e-d263641f40f0"). InnerVolumeSpecName "kube-api-access-f22r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.882019 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-scripts" (OuterVolumeSpecName: "scripts") pod "ed6b0956-20fc-4437-a91e-d263641f40f0" (UID: "ed6b0956-20fc-4437-a91e-d263641f40f0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.882279 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.882292 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-logs" (OuterVolumeSpecName: "logs") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.882931 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-scripts" (OuterVolumeSpecName: "scripts") pod "0bd7c5af-516a-4215-8ff3-73c83a234c97" (UID: "0bd7c5af-516a-4215-8ff3-73c83a234c97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.883058 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.883129 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-config-data" (OuterVolumeSpecName: "config-data") pod "ed6b0956-20fc-4437-a91e-d263641f40f0" (UID: "ed6b0956-20fc-4437-a91e-d263641f40f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.883395 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd7c5af-516a-4215-8ff3-73c83a234c97-logs" (OuterVolumeSpecName: "logs") pod "0bd7c5af-516a-4215-8ff3-73c83a234c97" (UID: "0bd7c5af-516a-4215-8ff3-73c83a234c97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.884136 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-kube-api-access-rdfvh" (OuterVolumeSpecName: "kube-api-access-rdfvh") pod "e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" (UID: "e8042faf-fbbd-4bc0-9f82-6d077bb32a5d"). InnerVolumeSpecName "kube-api-access-rdfvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.885111 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed6b0956-20fc-4437-a91e-d263641f40f0-logs" (OuterVolumeSpecName: "logs") pod "ed6b0956-20fc-4437-a91e-d263641f40f0" (UID: "ed6b0956-20fc-4437-a91e-d263641f40f0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.888651 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6b0956-20fc-4437-a91e-d263641f40f0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ed6b0956-20fc-4437-a91e-d263641f40f0" (UID: "ed6b0956-20fc-4437-a91e-d263641f40f0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.890939 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.891194 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-kube-api-access-w2slw" (OuterVolumeSpecName: "kube-api-access-w2slw") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "kube-api-access-w2slw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.891512 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-config-data" (OuterVolumeSpecName: "config-data") pod "0bd7c5af-516a-4215-8ff3-73c83a234c97" (UID: "0bd7c5af-516a-4215-8ff3-73c83a234c97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.892442 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381e837e-ca4b-4b5a-8095-1f2894e75c58-kube-api-access-nzv9t" (OuterVolumeSpecName: "kube-api-access-nzv9t") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "kube-api-access-nzv9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.893152 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.895717 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-scripts" (OuterVolumeSpecName: "scripts") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.896448 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-scripts" (OuterVolumeSpecName: "scripts") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.903115 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd7c5af-516a-4215-8ff3-73c83a234c97-kube-api-access-9x4t2" (OuterVolumeSpecName: "kube-api-access-9x4t2") pod "0bd7c5af-516a-4215-8ff3-73c83a234c97" (UID: "0bd7c5af-516a-4215-8ff3-73c83a234c97"). InnerVolumeSpecName "kube-api-access-9x4t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.903410 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7950bc21-2f03-4e16-a9e7-2c76a48078df" (UID: "7950bc21-2f03-4e16-a9e7-2c76a48078df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.912255 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bd7c5af-516a-4215-8ff3-73c83a234c97-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0bd7c5af-516a-4215-8ff3-73c83a234c97" (UID: "0bd7c5af-516a-4215-8ff3-73c83a234c97"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.916413 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.920571 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-config" (OuterVolumeSpecName: "config") pod "e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" (UID: "e8042faf-fbbd-4bc0-9f82-6d077bb32a5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.922127 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7950bc21-2f03-4e16-a9e7-2c76a48078df" (UID: "7950bc21-2f03-4e16-a9e7-2c76a48078df"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.922139 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.932650 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8llx4" event={"ID":"e8042faf-fbbd-4bc0-9f82-6d077bb32a5d","Type":"ContainerDied","Data":"8f86d89ebe0adef0688d9491ba3b915da2a9de42d3932c42856f31c3c135c724"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.932794 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f86d89ebe0adef0688d9491ba3b915da2a9de42d3932c42856f31c3c135c724" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.932857 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-798dcf488c-4k96z" event={"ID":"0bd7c5af-516a-4215-8ff3-73c83a234c97","Type":"ContainerDied","Data":"15fc83ba9d4e6147ca865647b03a5b8f5570c861fd23c1c3407c5717b667ad44"} Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.940374 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" (UID: "e8042faf-fbbd-4bc0-9f82-6d077bb32a5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.941400 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.948201 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-config-data" (OuterVolumeSpecName: "config-data") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.956749 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-config-data" (OuterVolumeSpecName: "config-data") pod "381e837e-ca4b-4b5a-8095-1f2894e75c58" (UID: "381e837e-ca4b-4b5a-8095-1f2894e75c58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.971220 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" (UID: "dbcb8b2d-b4f5-4382-a720-a8771cf56b2d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977574 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977605 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/381e837e-ca4b-4b5a-8095-1f2894e75c58-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977614 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977622 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bd7c5af-516a-4215-8ff3-73c83a234c97-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977630 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977639 4880 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed6b0956-20fc-4437-a91e-d263641f40f0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977668 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977677 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0bd7c5af-516a-4215-8ff3-73c83a234c97-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977686 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977696 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977704 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977712 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7950bc21-2f03-4e16-a9e7-2c76a48078df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977719 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed6b0956-20fc-4437-a91e-d263641f40f0-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977728 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2slw\" (UniqueName: \"kubernetes.io/projected/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-kube-api-access-w2slw\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977736 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: 
I1201 03:14:14.977743 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977752 4880 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977764 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977772 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977779 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977793 4880 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977801 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed6b0956-20fc-4437-a91e-d263641f40f0-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977809 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977816 4880 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0bd7c5af-516a-4215-8ff3-73c83a234c97-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977826 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x4t2\" (UniqueName: \"kubernetes.io/projected/0bd7c5af-516a-4215-8ff3-73c83a234c97-kube-api-access-9x4t2\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977835 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/381e837e-ca4b-4b5a-8095-1f2894e75c58-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977844 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f22r9\" (UniqueName: \"kubernetes.io/projected/ed6b0956-20fc-4437-a91e-d263641f40f0-kube-api-access-f22r9\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977859 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977878 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdfvh\" (UniqueName: \"kubernetes.io/projected/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-kube-api-access-rdfvh\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977887 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d-config\") on node \"crc\" DevicePath \"\"" Dec 
01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977895 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.977902 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzv9t\" (UniqueName: \"kubernetes.io/projected/381e837e-ca4b-4b5a-8095-1f2894e75c58-kube-api-access-nzv9t\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.993601 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 01 03:14:14 crc kubenswrapper[4880]: I1201 03:14:14.995225 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.080810 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.080842 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.157252 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b8bdd5655-fwp7z"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.189960 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b8bdd5655-fwp7z"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.212996 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:14:15 crc 
kubenswrapper[4880]: I1201 03:14:15.250560 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.274466 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.294202 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.302641 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303041 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303062 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303079 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-httpd" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303092 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-httpd" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303120 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="init" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303127 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="init" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303139 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" containerName="neutron-db-sync" 
Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303146 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" containerName="neutron-db-sync" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303155 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-httpd" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303161 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-httpd" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303181 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-log" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303187 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-log" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.303195 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-log" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303201 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-log" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303415 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-log" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303428 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303442 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" containerName="glance-httpd" Dec 01 03:14:15 crc kubenswrapper[4880]: 
I1201 03:14:15.303449 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" containerName="neutron-db-sync" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303462 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-httpd" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.303474 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" containerName="glance-log" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.304606 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.307280 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.307453 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.307551 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kg2c4" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.307669 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.308557 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69456b8679-gnrn4"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.313671 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69456b8679-gnrn4"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.326557 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.340268 4880 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.341813 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.344001 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.344152 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.350982 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-585764b957-fcwmr"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.365158 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-585764b957-fcwmr"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.382052 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.407158 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-798dcf488c-4k96z"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.416931 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-798dcf488c-4k96z"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487601 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-logs\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487681 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-scripts\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487715 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487732 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487747 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487783 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrnk\" (UniqueName: \"kubernetes.io/projected/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-kube-api-access-xvrnk\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487801 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487824 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487842 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487860 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487890 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-config-data\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487906 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487922 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487943 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487966 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ljj\" (UniqueName: \"kubernetes.io/projected/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-kube-api-access-94ljj\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.487985 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-logs\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.519079 4880 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.519353 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.519491 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gzwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-mgzm4_openstack(81ee6695-1440-4087-b17a-0af2371eceed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.520904 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-mgzm4" podUID="81ee6695-1440-4087-b17a-0af2371eceed" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589247 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ljj\" (UniqueName: \"kubernetes.io/projected/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-kube-api-access-94ljj\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589292 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-logs\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: 
I1201 03:14:15.589337 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-logs\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589401 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-scripts\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589432 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589451 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589484 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589509 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrnk\" 
(UniqueName: \"kubernetes.io/projected/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-kube-api-access-xvrnk\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589526 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589560 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589580 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589596 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589611 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-config-data\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589642 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589656 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.589677 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.590784 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-logs\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.591236 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.592312 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.592446 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-logs\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.592684 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.593075 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.596427 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.608521 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.611083 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-scripts\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.614183 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.615267 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.615490 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrnk\" (UniqueName: \"kubernetes.io/projected/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-kube-api-access-xvrnk\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 
03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.624549 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.641110 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.642450 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ljj\" (UniqueName: \"kubernetes.io/projected/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-kube-api-access-94ljj\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.657815 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-config-data\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.662682 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.668196 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: E1201 03:14:15.807447 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/heat-db-sync-mgzm4" podUID="81ee6695-1440-4087-b17a-0af2371eceed" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.923161 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.958311 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.973255 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-577f8db8c5-k8vsc"] Dec 01 03:14:15 crc kubenswrapper[4880]: I1201 03:14:15.979882 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.024331 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-577f8db8c5-k8vsc"] Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.109142 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-svc\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.109318 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-swift-storage-0\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.109432 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-nb\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.109466 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-config\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.109563 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fps64\" (UniqueName: \"kubernetes.io/projected/8826ba7a-b62d-4a95-b97c-4bab19f08919-kube-api-access-fps64\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.109591 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-sb\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.131251 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7877878478-zq76n"] Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.137783 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.140721 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ljj48" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.140940 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.142227 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.142380 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.188672 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7877878478-zq76n"] Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.216803 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-svc\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.216912 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-swift-storage-0\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.216957 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-nb\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.216978 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-config\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.217022 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fps64\" (UniqueName: \"kubernetes.io/projected/8826ba7a-b62d-4a95-b97c-4bab19f08919-kube-api-access-fps64\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.217041 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-sb\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.217989 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-swift-storage-0\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.222247 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-sb\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.242590 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-config\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.248330 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-nb\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.250089 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-svc\") pod 
\"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.255407 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fps64\" (UniqueName: \"kubernetes.io/projected/8826ba7a-b62d-4a95-b97c-4bab19f08919-kube-api-access-fps64\") pod \"dnsmasq-dns-577f8db8c5-k8vsc\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.321238 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-httpd-config\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.321295 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-combined-ca-bundle\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.321338 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qln9\" (UniqueName: \"kubernetes.io/projected/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-kube-api-access-5qln9\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.321368 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-ovndb-tls-certs\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.321421 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-config\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.324818 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.425069 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-ovndb-tls-certs\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.425206 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-config\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.425343 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-httpd-config\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.425410 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-combined-ca-bundle\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.425927 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qln9\" (UniqueName: \"kubernetes.io/projected/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-kube-api-access-5qln9\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.435733 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-config\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.440784 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-combined-ca-bundle\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.456837 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-httpd-config\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.457638 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-ovndb-tls-certs\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.483464 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qln9\" (UniqueName: \"kubernetes.io/projected/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-kube-api-access-5qln9\") pod \"neutron-7877878478-zq76n\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.573751 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69456b8679-gnrn4" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.767799 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.798687 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd7c5af-516a-4215-8ff3-73c83a234c97" path="/var/lib/kubelet/pods/0bd7c5af-516a-4215-8ff3-73c83a234c97/volumes" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.801628 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381e837e-ca4b-4b5a-8095-1f2894e75c58" path="/var/lib/kubelet/pods/381e837e-ca4b-4b5a-8095-1f2894e75c58/volumes" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.802449 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7950bc21-2f03-4e16-a9e7-2c76a48078df" path="/var/lib/kubelet/pods/7950bc21-2f03-4e16-a9e7-2c76a48078df/volumes" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.803151 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcb8b2d-b4f5-4382-a720-a8771cf56b2d" path="/var/lib/kubelet/pods/dbcb8b2d-b4f5-4382-a720-a8771cf56b2d/volumes" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.804822 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6b0956-20fc-4437-a91e-d263641f40f0" path="/var/lib/kubelet/pods/ed6b0956-20fc-4437-a91e-d263641f40f0/volumes" Dec 01 03:14:16 crc kubenswrapper[4880]: I1201 03:14:16.805392 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f94a9fea-b4b8-4f53-8190-9ababbd17d49" path="/var/lib/kubelet/pods/f94a9fea-b4b8-4f53-8190-9ababbd17d49/volumes" Dec 01 03:14:17 crc kubenswrapper[4880]: I1201 03:14:17.368622 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:14:17 crc kubenswrapper[4880]: I1201 03:14:17.368696 4880 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.133812 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d47fdc4c7-xl94f"] Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.136537 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.138828 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.142498 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.152116 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d47fdc4c7-xl94f"] Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.280815 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-combined-ca-bundle\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.281071 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-httpd-config\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 
03:14:19.281427 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjhpq\" (UniqueName: \"kubernetes.io/projected/3b132fb3-f361-4b31-a0b7-73af662a12a6-kube-api-access-sjhpq\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.281520 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-internal-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.281589 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-ovndb-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.281668 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-config\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.281716 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-public-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 
03:14:19.383723 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-combined-ca-bundle\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.384068 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-httpd-config\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.384306 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjhpq\" (UniqueName: \"kubernetes.io/projected/3b132fb3-f361-4b31-a0b7-73af662a12a6-kube-api-access-sjhpq\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.384509 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-internal-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.384640 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-ovndb-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.384779 4880 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-config\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.384936 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-public-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.390420 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-internal-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.391338 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-httpd-config\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.393234 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-config\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.394332 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-public-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: 
\"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.415769 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-ovndb-tls-certs\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.416223 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjhpq\" (UniqueName: \"kubernetes.io/projected/3b132fb3-f361-4b31-a0b7-73af662a12a6-kube-api-access-sjhpq\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.429646 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-combined-ca-bundle\") pod \"neutron-d47fdc4c7-xl94f\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:19 crc kubenswrapper[4880]: I1201 03:14:19.457330 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:20 crc kubenswrapper[4880]: E1201 03:14:20.761455 4880 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:14:20 crc kubenswrapper[4880]: E1201 03:14:20.761766 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:14:20 crc kubenswrapper[4880]: E1201 03:14:20.761935 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropag
ation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcvhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-lczzw_openstack(fe59d4ff-1b09-4404-a45d-4b2b73e3ac31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:14:20 crc kubenswrapper[4880]: E1201 03:14:20.765381 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-lczzw" podUID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" Dec 01 03:14:20 crc kubenswrapper[4880]: E1201 03:14:20.859189 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.18:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/cinder-db-sync-lczzw" podUID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.270612 4880 scope.go:117] "RemoveContainer" containerID="2d6185e84047674b4a2602852cc3be2cbcc128820fa83ccde51b2537064f3783" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.449655 4880 scope.go:117] "RemoveContainer" containerID="414819b743e4d62bce116c64ce9d9b0648de297b47b867cb07f9b57ea476e794" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.489717 4880 scope.go:117] "RemoveContainer" containerID="e56af5014235d8a8bdbbcfab205a7286265b867d6b0dd3bb36d7eef94448d3fb" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.541325 4880 scope.go:117] "RemoveContainer" containerID="5171f871e6d95428c049012148c9a0c79198b7125214c960436f3ea2c2af2217" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.653503 4880 scope.go:117] "RemoveContainer" containerID="35d7a3aac11a41c678640d39e8560a7ebfa91494e3f062b9fe2474f349f75820" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.728495 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56cc96959b-rrjz7"] Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.742645 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6ddc7fc844-5qd9h"] Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.883325 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxfjq" event={"ID":"fb21b71d-303a-4e92-9086-789ded0f11fa","Type":"ContainerStarted","Data":"217162db11bdf228ddd8041b41bf9ff03ce0a43a77ef72edcbc5ede8c120fbf5"} Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.886666 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" 
event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerStarted","Data":"92054959502be3c0ce330e1a0475d8371e7ca9a6b7e4dd77ecf8476f920cd048"} Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.889462 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerStarted","Data":"8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35"} Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.894181 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56cc96959b-rrjz7" event={"ID":"24a10152-f651-41de-9680-872d96690cd5","Type":"ContainerStarted","Data":"487cdd40b662281d87c32cdd30b9db514bf8df0f39c1e7460f6c33ed299957ba"} Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.909973 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gxfjq" podStartSLOduration=11.142601027 podStartE2EDuration="46.909958668s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="2025-12-01 03:13:38.74198155 +0000 UTC m=+1048.253235922" lastFinishedPulling="2025-12-01 03:14:14.509339161 +0000 UTC m=+1084.020593563" observedRunningTime="2025-12-01 03:14:21.903519456 +0000 UTC m=+1091.414773828" watchObservedRunningTime="2025-12-01 03:14:21.909958668 +0000 UTC m=+1091.421213040" Dec 01 03:14:21 crc kubenswrapper[4880]: I1201 03:14:21.911884 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9vgls"] Dec 01 03:14:22 crc kubenswrapper[4880]: I1201 03:14:22.002429 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 03:14:22 crc kubenswrapper[4880]: I1201 03:14:22.147622 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:14:22 crc kubenswrapper[4880]: I1201 03:14:22.183104 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-577f8db8c5-k8vsc"] Dec 01 03:14:22 crc kubenswrapper[4880]: I1201 03:14:22.223243 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:14:22 crc kubenswrapper[4880]: W1201 03:14:22.226447 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8826ba7a_b62d_4a95_b97c_4bab19f08919.slice/crio-bc71223837848571a1d6d9fed123299293e008af73d611c85775ea3779e1bcdb WatchSource:0}: Error finding container bc71223837848571a1d6d9fed123299293e008af73d611c85775ea3779e1bcdb: Status 404 returned error can't find the container with id bc71223837848571a1d6d9fed123299293e008af73d611c85775ea3779e1bcdb Dec 01 03:14:22 crc kubenswrapper[4880]: I1201 03:14:22.458802 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d47fdc4c7-xl94f"] Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.013212 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56cc96959b-rrjz7" event={"ID":"24a10152-f651-41de-9680-872d96690cd5","Type":"ContainerStarted","Data":"d48b4e5b48bae1061c95778829126aeacc2e919e5babe0eb192f1398b3cc5ed3"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.013612 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56cc96959b-rrjz7" event={"ID":"24a10152-f651-41de-9680-872d96690cd5","Type":"ContainerStarted","Data":"6d225da0b1dc9c308f705f536a96afa7a31d716c7c750115324a6b864fbbb2a4"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.018727 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zncgf" event={"ID":"d3288e77-4e64-48d4-995e-93abe07bf1bd","Type":"ContainerStarted","Data":"eac99c63e989f554cebf3d5f8c8b31b2bddeb7c6171daeb42033b08fe99911e8"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.026865 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-d47fdc4c7-xl94f" event={"ID":"3b132fb3-f361-4b31-a0b7-73af662a12a6","Type":"ContainerStarted","Data":"12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.027341 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d47fdc4c7-xl94f" event={"ID":"3b132fb3-f361-4b31-a0b7-73af662a12a6","Type":"ContainerStarted","Data":"dab72ee89928ae1f1241e3691435fb27d7fd2f8961eb30f775a5783621825f4d"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.039642 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vgls" event={"ID":"a15fa76f-467b-485e-96b8-5fdec71318f5","Type":"ContainerStarted","Data":"5476647e93f626d6147fb6155e31427e9ea0f6df268694de98f70d8b9f9f1c1c"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.039685 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vgls" event={"ID":"a15fa76f-467b-485e-96b8-5fdec71318f5","Type":"ContainerStarted","Data":"4736d3b2e9654701a078567db5505db5f0e0c3f5d1a3ebd3ba3ce51aaffa825a"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.049443 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56cc96959b-rrjz7" podStartSLOduration=37.928992338 podStartE2EDuration="38.049418393s" podCreationTimestamp="2025-12-01 03:13:45 +0000 UTC" firstStartedPulling="2025-12-01 03:14:21.768303349 +0000 UTC m=+1091.279557721" lastFinishedPulling="2025-12-01 03:14:21.888729404 +0000 UTC m=+1091.399983776" observedRunningTime="2025-12-01 03:14:23.032610811 +0000 UTC m=+1092.543865183" watchObservedRunningTime="2025-12-01 03:14:23.049418393 +0000 UTC m=+1092.560672765" Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.064584 4880 generic.go:334] "Generic (PLEG): container finished" podID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerID="1b94c7675392d0818d2b5dc2adae225ae9a3eff6f041f18f30338ebc707c2db6" 
exitCode=0 Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.064645 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" event={"ID":"8826ba7a-b62d-4a95-b97c-4bab19f08919","Type":"ContainerDied","Data":"1b94c7675392d0818d2b5dc2adae225ae9a3eff6f041f18f30338ebc707c2db6"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.064667 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" event={"ID":"8826ba7a-b62d-4a95-b97c-4bab19f08919","Type":"ContainerStarted","Data":"bc71223837848571a1d6d9fed123299293e008af73d611c85775ea3779e1bcdb"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.071633 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zncgf" podStartSLOduration=5.310637738 podStartE2EDuration="48.071621111s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="2025-12-01 03:13:38.708782636 +0000 UTC m=+1048.220037008" lastFinishedPulling="2025-12-01 03:14:21.469766009 +0000 UTC m=+1090.981020381" observedRunningTime="2025-12-01 03:14:23.069907138 +0000 UTC m=+1092.581161500" watchObservedRunningTime="2025-12-01 03:14:23.071621111 +0000 UTC m=+1092.582875483" Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.089098 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerStarted","Data":"ca4abb4a90b26185324b9145545abeafcf27374b78455ab6064a09cf34a460ca"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.089132 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerStarted","Data":"a21857edb278cf8f3e444c37932515dbca958eaea72385984220acbeafa3688d"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.105528 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39","Type":"ContainerStarted","Data":"6dc94bd57d6322bebd89779f140ddfaaf30910dcc3ed0cfe6d660927f793dda0"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.107702 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de","Type":"ContainerStarted","Data":"c80885deee2b1bbce85915877c215dd4fdbbe14938a4347cd26425aaeac5ee74"} Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.142348 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9vgls" podStartSLOduration=29.142331907 podStartE2EDuration="29.142331907s" podCreationTimestamp="2025-12-01 03:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:23.137305511 +0000 UTC m=+1092.648559893" watchObservedRunningTime="2025-12-01 03:14:23.142331907 +0000 UTC m=+1092.653586279" Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.177541 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6ddc7fc844-5qd9h" podStartSLOduration=39.046214172 podStartE2EDuration="39.177519311s" podCreationTimestamp="2025-12-01 03:13:44 +0000 UTC" firstStartedPulling="2025-12-01 03:14:21.760697068 +0000 UTC m=+1091.271951430" lastFinishedPulling="2025-12-01 03:14:21.892002197 +0000 UTC m=+1091.403256569" observedRunningTime="2025-12-01 03:14:23.16074668 +0000 UTC m=+1092.672001042" watchObservedRunningTime="2025-12-01 03:14:23.177519311 +0000 UTC m=+1092.688773683" Dec 01 03:14:23 crc kubenswrapper[4880]: I1201 03:14:23.458695 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7877878478-zq76n"] Dec 01 03:14:23 crc kubenswrapper[4880]: W1201 03:14:23.480326 4880 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf883fbd2_c0ad_4d3e_a56d_c99c361b6439.slice/crio-e0933fa61e5438d02c8cc46902fc4ab36b34544e4a9e5027ed3ff15e64401671 WatchSource:0}: Error finding container e0933fa61e5438d02c8cc46902fc4ab36b34544e4a9e5027ed3ff15e64401671: Status 404 returned error can't find the container with id e0933fa61e5438d02c8cc46902fc4ab36b34544e4a9e5027ed3ff15e64401671 Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.129481 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" event={"ID":"8826ba7a-b62d-4a95-b97c-4bab19f08919","Type":"ContainerStarted","Data":"798b8ca8a8317acd9e9448723b1682ec624edcc5f5ea1c2d99194ec80ed48c7d"} Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.130249 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.134767 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39","Type":"ContainerStarted","Data":"eb51283a0e8691b394f87355ab890692736063b68e69d530ad40b7746aab5d28"} Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.138530 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de","Type":"ContainerStarted","Data":"ddce1a415abf9040e0ac23833658699ba84695c544be7d0c19978a697da91b13"} Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.149813 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877878478-zq76n" event={"ID":"f883fbd2-c0ad-4d3e-a56d-c99c361b6439","Type":"ContainerStarted","Data":"77e9e062b9ae6444a7b89c7660df7c4cac97fb89df8a998362c21a097b2908f7"} Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.149915 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-7877878478-zq76n" event={"ID":"f883fbd2-c0ad-4d3e-a56d-c99c361b6439","Type":"ContainerStarted","Data":"e0933fa61e5438d02c8cc46902fc4ab36b34544e4a9e5027ed3ff15e64401671"} Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.151857 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" podStartSLOduration=9.151839928 podStartE2EDuration="9.151839928s" podCreationTimestamp="2025-12-01 03:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:24.150127135 +0000 UTC m=+1093.661381507" watchObservedRunningTime="2025-12-01 03:14:24.151839928 +0000 UTC m=+1093.663094300" Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.176501 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d47fdc4c7-xl94f" event={"ID":"3b132fb3-f361-4b31-a0b7-73af662a12a6","Type":"ContainerStarted","Data":"4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617"} Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.177247 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:24 crc kubenswrapper[4880]: I1201 03:14:24.209352 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d47fdc4c7-xl94f" podStartSLOduration=5.209308612 podStartE2EDuration="5.209308612s" podCreationTimestamp="2025-12-01 03:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:24.201119856 +0000 UTC m=+1093.712374228" watchObservedRunningTime="2025-12-01 03:14:24.209308612 +0000 UTC m=+1093.720562984" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.191635 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39","Type":"ContainerStarted","Data":"315f976ef09fb985020ef772d3f5bbfb7037c0dc122146af2ac3486e60af9d21"} Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.194437 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de","Type":"ContainerStarted","Data":"0404c3bbd5eeb73d60c7b1258d7e7601e35d3db5a09b65495772b561ddbf142e"} Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.202155 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877878478-zq76n" event={"ID":"f883fbd2-c0ad-4d3e-a56d-c99c361b6439","Type":"ContainerStarted","Data":"380169dcb254095b355faf5dd1dfa0d266524fd767be92ab5d1b06667b934dfc"} Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.202194 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.218381 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.21836186 podStartE2EDuration="10.21836186s" podCreationTimestamp="2025-12-01 03:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:25.207263341 +0000 UTC m=+1094.718517713" watchObservedRunningTime="2025-12-01 03:14:25.21836186 +0000 UTC m=+1094.729616232" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.234142 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.234128366 podStartE2EDuration="10.234128366s" podCreationTimestamp="2025-12-01 03:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:25.232206008 +0000 UTC 
m=+1094.743460380" watchObservedRunningTime="2025-12-01 03:14:25.234128366 +0000 UTC m=+1094.745382738" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.276444 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.277140 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.440827 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.440933 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.923665 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.923736 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.961002 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.961062 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.970147 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 03:14:25 crc kubenswrapper[4880]: I1201 03:14:25.997524 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7877878478-zq76n" podStartSLOduration=9.997501784 podStartE2EDuration="9.997501784s" 
podCreationTimestamp="2025-12-01 03:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:25.266899799 +0000 UTC m=+1094.778154181" watchObservedRunningTime="2025-12-01 03:14:25.997501784 +0000 UTC m=+1095.508756156" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.016278 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.024638 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.025369 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.213533 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.213907 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.213939 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:26 crc kubenswrapper[4880]: I1201 03:14:26.213949 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 03:14:27 crc kubenswrapper[4880]: I1201 03:14:27.226139 4880 generic.go:334] "Generic (PLEG): container finished" podID="fb21b71d-303a-4e92-9086-789ded0f11fa" containerID="217162db11bdf228ddd8041b41bf9ff03ce0a43a77ef72edcbc5ede8c120fbf5" exitCode=0 Dec 01 03:14:27 crc kubenswrapper[4880]: I1201 03:14:27.226172 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxfjq" 
event={"ID":"fb21b71d-303a-4e92-9086-789ded0f11fa","Type":"ContainerDied","Data":"217162db11bdf228ddd8041b41bf9ff03ce0a43a77ef72edcbc5ede8c120fbf5"} Dec 01 03:14:28 crc kubenswrapper[4880]: E1201 03:14:28.051256 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15fa76f_467b_485e_96b8_5fdec71318f5.slice/crio-conmon-5476647e93f626d6147fb6155e31427e9ea0f6df268694de98f70d8b9f9f1c1c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15fa76f_467b_485e_96b8_5fdec71318f5.slice/crio-5476647e93f626d6147fb6155e31427e9ea0f6df268694de98f70d8b9f9f1c1c.scope\": RecentStats: unable to find data in memory cache]" Dec 01 03:14:28 crc kubenswrapper[4880]: I1201 03:14:28.236194 4880 generic.go:334] "Generic (PLEG): container finished" podID="d3288e77-4e64-48d4-995e-93abe07bf1bd" containerID="eac99c63e989f554cebf3d5f8c8b31b2bddeb7c6171daeb42033b08fe99911e8" exitCode=0 Dec 01 03:14:28 crc kubenswrapper[4880]: I1201 03:14:28.236371 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zncgf" event={"ID":"d3288e77-4e64-48d4-995e-93abe07bf1bd","Type":"ContainerDied","Data":"eac99c63e989f554cebf3d5f8c8b31b2bddeb7c6171daeb42033b08fe99911e8"} Dec 01 03:14:28 crc kubenswrapper[4880]: I1201 03:14:28.239684 4880 generic.go:334] "Generic (PLEG): container finished" podID="a15fa76f-467b-485e-96b8-5fdec71318f5" containerID="5476647e93f626d6147fb6155e31427e9ea0f6df268694de98f70d8b9f9f1c1c" exitCode=0 Dec 01 03:14:28 crc kubenswrapper[4880]: I1201 03:14:28.239850 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vgls" event={"ID":"a15fa76f-467b-485e-96b8-5fdec71318f5","Type":"ContainerDied","Data":"5476647e93f626d6147fb6155e31427e9ea0f6df268694de98f70d8b9f9f1c1c"} Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 
03:14:31.328685 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.448960 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8669fc467f-m9rgg"] Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.449243 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="dnsmasq-dns" containerID="cri-o://f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183" gracePeriod=10 Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.668002 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.707031 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zncgf" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.707847 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.734902 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gxfjq" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744010 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-db-sync-config-data\") pod \"d3288e77-4e64-48d4-995e-93abe07bf1bd\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744048 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-config-data\") pod \"fb21b71d-303a-4e92-9086-789ded0f11fa\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744083 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-scripts\") pod \"a15fa76f-467b-485e-96b8-5fdec71318f5\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744107 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb21b71d-303a-4e92-9086-789ded0f11fa-logs\") pod \"fb21b71d-303a-4e92-9086-789ded0f11fa\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744134 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-scripts\") pod \"fb21b71d-303a-4e92-9086-789ded0f11fa\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744196 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hd8t\" (UniqueName: 
\"kubernetes.io/projected/d3288e77-4e64-48d4-995e-93abe07bf1bd-kube-api-access-4hd8t\") pod \"d3288e77-4e64-48d4-995e-93abe07bf1bd\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744211 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-config-data\") pod \"a15fa76f-467b-485e-96b8-5fdec71318f5\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744253 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-credential-keys\") pod \"a15fa76f-467b-485e-96b8-5fdec71318f5\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744282 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-fernet-keys\") pod \"a15fa76f-467b-485e-96b8-5fdec71318f5\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744319 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-combined-ca-bundle\") pod \"d3288e77-4e64-48d4-995e-93abe07bf1bd\" (UID: \"d3288e77-4e64-48d4-995e-93abe07bf1bd\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744355 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqcxx\" (UniqueName: \"kubernetes.io/projected/fb21b71d-303a-4e92-9086-789ded0f11fa-kube-api-access-sqcxx\") pod \"fb21b71d-303a-4e92-9086-789ded0f11fa\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 
03:14:31.744395 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-combined-ca-bundle\") pod \"a15fa76f-467b-485e-96b8-5fdec71318f5\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744433 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-combined-ca-bundle\") pod \"fb21b71d-303a-4e92-9086-789ded0f11fa\" (UID: \"fb21b71d-303a-4e92-9086-789ded0f11fa\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.744466 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqd2n\" (UniqueName: \"kubernetes.io/projected/a15fa76f-467b-485e-96b8-5fdec71318f5-kube-api-access-mqd2n\") pod \"a15fa76f-467b-485e-96b8-5fdec71318f5\" (UID: \"a15fa76f-467b-485e-96b8-5fdec71318f5\") " Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.768918 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb21b71d-303a-4e92-9086-789ded0f11fa-logs" (OuterVolumeSpecName: "logs") pod "fb21b71d-303a-4e92-9086-789ded0f11fa" (UID: "fb21b71d-303a-4e92-9086-789ded0f11fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.799461 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d3288e77-4e64-48d4-995e-93abe07bf1bd" (UID: "d3288e77-4e64-48d4-995e-93abe07bf1bd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.814504 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-scripts" (OuterVolumeSpecName: "scripts") pod "a15fa76f-467b-485e-96b8-5fdec71318f5" (UID: "a15fa76f-467b-485e-96b8-5fdec71318f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.814718 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15fa76f-467b-485e-96b8-5fdec71318f5-kube-api-access-mqd2n" (OuterVolumeSpecName: "kube-api-access-mqd2n") pod "a15fa76f-467b-485e-96b8-5fdec71318f5" (UID: "a15fa76f-467b-485e-96b8-5fdec71318f5"). InnerVolumeSpecName "kube-api-access-mqd2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.838335 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-scripts" (OuterVolumeSpecName: "scripts") pod "fb21b71d-303a-4e92-9086-789ded0f11fa" (UID: "fb21b71d-303a-4e92-9086-789ded0f11fa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.847974 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqd2n\" (UniqueName: \"kubernetes.io/projected/a15fa76f-467b-485e-96b8-5fdec71318f5-kube-api-access-mqd2n\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.847998 4880 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.848007 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.848015 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb21b71d-303a-4e92-9086-789ded0f11fa-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.848025 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.852391 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-config-data" (OuterVolumeSpecName: "config-data") pod "a15fa76f-467b-485e-96b8-5fdec71318f5" (UID: "a15fa76f-467b-485e-96b8-5fdec71318f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.853908 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb21b71d-303a-4e92-9086-789ded0f11fa-kube-api-access-sqcxx" (OuterVolumeSpecName: "kube-api-access-sqcxx") pod "fb21b71d-303a-4e92-9086-789ded0f11fa" (UID: "fb21b71d-303a-4e92-9086-789ded0f11fa"). InnerVolumeSpecName "kube-api-access-sqcxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.854089 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3288e77-4e64-48d4-995e-93abe07bf1bd-kube-api-access-4hd8t" (OuterVolumeSpecName: "kube-api-access-4hd8t") pod "d3288e77-4e64-48d4-995e-93abe07bf1bd" (UID: "d3288e77-4e64-48d4-995e-93abe07bf1bd"). InnerVolumeSpecName "kube-api-access-4hd8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.867796 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a15fa76f-467b-485e-96b8-5fdec71318f5" (UID: "a15fa76f-467b-485e-96b8-5fdec71318f5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.901297 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a15fa76f-467b-485e-96b8-5fdec71318f5" (UID: "a15fa76f-467b-485e-96b8-5fdec71318f5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.921398 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-config-data" (OuterVolumeSpecName: "config-data") pod "fb21b71d-303a-4e92-9086-789ded0f11fa" (UID: "fb21b71d-303a-4e92-9086-789ded0f11fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.938067 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3288e77-4e64-48d4-995e-93abe07bf1bd" (UID: "d3288e77-4e64-48d4-995e-93abe07bf1bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.957840 4880 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.973539 4880 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.973575 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3288e77-4e64-48d4-995e-93abe07bf1bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.973590 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqcxx\" (UniqueName: \"kubernetes.io/projected/fb21b71d-303a-4e92-9086-789ded0f11fa-kube-api-access-sqcxx\") on node \"crc\" 
DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.973602 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.973614 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hd8t\" (UniqueName: \"kubernetes.io/projected/d3288e77-4e64-48d4-995e-93abe07bf1bd-kube-api-access-4hd8t\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:31 crc kubenswrapper[4880]: I1201 03:14:31.973623 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.046395 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb21b71d-303a-4e92-9086-789ded0f11fa" (UID: "fb21b71d-303a-4e92-9086-789ded0f11fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.074658 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb21b71d-303a-4e92-9086-789ded0f11fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.124309 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a15fa76f-467b-485e-96b8-5fdec71318f5" (UID: "a15fa76f-467b-485e-96b8-5fdec71318f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.176170 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15fa76f-467b-485e-96b8-5fdec71318f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.270582 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.316752 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9vgls" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.316952 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9vgls" event={"ID":"a15fa76f-467b-485e-96b8-5fdec71318f5","Type":"ContainerDied","Data":"4736d3b2e9654701a078567db5505db5f0e0c3f5d1a3ebd3ba3ce51aaffa825a"} Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.316988 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4736d3b2e9654701a078567db5505db5f0e0c3f5d1a3ebd3ba3ce51aaffa825a" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.378354 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-sb\") pod \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.378402 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-config\") pod \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.378432 4880 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-svc\") pod \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.378552 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgpdt\" (UniqueName: \"kubernetes.io/projected/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-kube-api-access-bgpdt\") pod \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.378590 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-nb\") pod \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.378638 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-swift-storage-0\") pod \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\" (UID: \"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9\") " Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.397287 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgzm4" event={"ID":"81ee6695-1440-4087-b17a-0af2371eceed","Type":"ContainerStarted","Data":"303e097529623c44ececcbd246b782561ac6ab40f4a2853928bd50f834c952f0"} Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.415272 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerStarted","Data":"d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c"} Dec 01 03:14:32 crc 
kubenswrapper[4880]: I1201 03:14:32.433000 4880 generic.go:334] "Generic (PLEG): container finished" podID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerID="f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183" exitCode=0 Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.433066 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" event={"ID":"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9","Type":"ContainerDied","Data":"f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183"} Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.433091 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" event={"ID":"6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9","Type":"ContainerDied","Data":"e4b2e7ff44b97dc1d2f69d4c21b541fef511b27fcc837c80bb98ccacdcce3e36"} Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.433107 4880 scope.go:117] "RemoveContainer" containerID="f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.433228 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8669fc467f-m9rgg" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.473080 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-kube-api-access-bgpdt" (OuterVolumeSpecName: "kube-api-access-bgpdt") pod "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" (UID: "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9"). InnerVolumeSpecName "kube-api-access-bgpdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.475717 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-mgzm4" podStartSLOduration=4.528106174 podStartE2EDuration="57.475699424s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="2025-12-01 03:13:38.562384038 +0000 UTC m=+1048.073638400" lastFinishedPulling="2025-12-01 03:14:31.509977278 +0000 UTC m=+1101.021231650" observedRunningTime="2025-12-01 03:14:32.449215895 +0000 UTC m=+1101.960470267" watchObservedRunningTime="2025-12-01 03:14:32.475699424 +0000 UTC m=+1101.986953796" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.483297 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zncgf" event={"ID":"d3288e77-4e64-48d4-995e-93abe07bf1bd","Type":"ContainerDied","Data":"f2447e9796430323003463ed257fc3e6a05173ec2c030486c13ff35cc0c9eb37"} Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.483443 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2447e9796430323003463ed257fc3e6a05173ec2c030486c13ff35cc0c9eb37" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.483604 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zncgf" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.488776 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgpdt\" (UniqueName: \"kubernetes.io/projected/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-kube-api-access-bgpdt\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.507111 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gxfjq" event={"ID":"fb21b71d-303a-4e92-9086-789ded0f11fa","Type":"ContainerDied","Data":"d7388ef2b1420b0f36a4f6bec9fc3723b76a63b2a3ebb038fb07af043c6dd0bb"} Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.507339 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7388ef2b1420b0f36a4f6bec9fc3723b76a63b2a3ebb038fb07af043c6dd0bb" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.507468 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gxfjq" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.540160 4880 scope.go:117] "RemoveContainer" containerID="7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.554360 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-config" (OuterVolumeSpecName: "config") pod "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" (UID: "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.570832 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" (UID: "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.596054 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.596083 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.600738 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" (UID: "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.605232 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" (UID: "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.605393 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" (UID: "6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.659426 4880 scope.go:117] "RemoveContainer" containerID="f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183" Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.660444 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183\": container with ID starting with f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183 not found: ID does not exist" containerID="f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.660479 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183"} err="failed to get container status \"f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183\": rpc error: code = NotFound desc = could not find container \"f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183\": container with ID starting with f58d3865c2ba97cf1c8e7b15e118b474ce391efac662c7e2ea8a1032b5c44183 not found: ID does not exist" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.660498 4880 scope.go:117] "RemoveContainer" containerID="7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744" Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.660763 4880 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744\": container with ID starting with 7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744 not found: ID does not exist" containerID="7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.660784 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744"} err="failed to get container status \"7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744\": rpc error: code = NotFound desc = could not find container \"7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744\": container with ID starting with 7c1fc539401b6d32577fdd49cc8c4e8c5093b3766704cf5e53de58ba2bd54744 not found: ID does not exist" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.698950 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.698992 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.699006 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.773766 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8669fc467f-m9rgg"] Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.802884 4880 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8669fc467f-m9rgg"] Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.925415 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d55d9c58d-c2xlp"] Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.925792 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3288e77-4e64-48d4-995e-93abe07bf1bd" containerName="barbican-db-sync" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.925803 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3288e77-4e64-48d4-995e-93abe07bf1bd" containerName="barbican-db-sync" Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.925817 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb21b71d-303a-4e92-9086-789ded0f11fa" containerName="placement-db-sync" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.925823 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb21b71d-303a-4e92-9086-789ded0f11fa" containerName="placement-db-sync" Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.925836 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="dnsmasq-dns" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.925842 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="dnsmasq-dns" Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.925851 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="init" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.925858 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="init" Dec 01 03:14:32 crc kubenswrapper[4880]: E1201 03:14:32.925877 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15fa76f-467b-485e-96b8-5fdec71318f5" 
containerName="keystone-bootstrap" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.925883 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15fa76f-467b-485e-96b8-5fdec71318f5" containerName="keystone-bootstrap" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.926070 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3288e77-4e64-48d4-995e-93abe07bf1bd" containerName="barbican-db-sync" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.926085 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" containerName="dnsmasq-dns" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.926102 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb21b71d-303a-4e92-9086-789ded0f11fa" containerName="placement-db-sync" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.926111 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15fa76f-467b-485e-96b8-5fdec71318f5" containerName="keystone-bootstrap" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.926930 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.931317 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8nbnh" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.931491 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.931596 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.931697 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.931700 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.948487 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c48bf866c-6nsdn"] Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.949478 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.958542 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hsw47" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.958721 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.958829 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.966190 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.966197 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 03:14:32 crc kubenswrapper[4880]: I1201 03:14:32.975713 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.005788 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d55d9c58d-c2xlp"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.027258 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-config-data\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.027392 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-config-data\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " 
pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.027444 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-credential-keys\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.027469 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-public-tls-certs\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.027495 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-internal-tls-certs\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.027534 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0048d9d8-73dc-41fb-b99b-c04fa3919a76-logs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059246 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-internal-tls-certs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: 
\"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059304 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-public-tls-certs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059419 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-fernet-keys\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059441 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-combined-ca-bundle\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059474 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-scripts\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059528 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-combined-ca-bundle\") pod \"placement-7d55d9c58d-c2xlp\" (UID: 
\"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059555 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-scripts\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059573 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvrr\" (UniqueName: \"kubernetes.io/projected/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-kube-api-access-5kvrr\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.059626 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhz7m\" (UniqueName: \"kubernetes.io/projected/0048d9d8-73dc-41fb-b99b-c04fa3919a76-kube-api-access-fhz7m\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.095758 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c48bf866c-6nsdn"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166231 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-config-data\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166268 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-credential-keys\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166289 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-public-tls-certs\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166306 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-internal-tls-certs\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166327 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0048d9d8-73dc-41fb-b99b-c04fa3919a76-logs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166348 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-internal-tls-certs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166368 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-public-tls-certs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166408 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-fernet-keys\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166424 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-combined-ca-bundle\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166439 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-scripts\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166466 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-combined-ca-bundle\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166483 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-scripts\") pod 
\"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166497 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kvrr\" (UniqueName: \"kubernetes.io/projected/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-kube-api-access-5kvrr\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166521 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhz7m\" (UniqueName: \"kubernetes.io/projected/0048d9d8-73dc-41fb-b99b-c04fa3919a76-kube-api-access-fhz7m\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.166558 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-config-data\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.169073 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-75d7465bbc-p5vvw"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.170643 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.173172 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-config-data\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.175563 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-combined-ca-bundle\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.177739 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2qx9w" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.177910 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.189299 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-public-tls-certs\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.197571 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0048d9d8-73dc-41fb-b99b-c04fa3919a76-logs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.208978 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-config-data\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.210430 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-internal-tls-certs\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.210978 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-fernet-keys\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.211041 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.211587 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-credential-keys\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.211795 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-public-tls-certs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc 
kubenswrapper[4880]: I1201 03:14:33.211816 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-combined-ca-bundle\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.212096 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-scripts\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.212163 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-scripts\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.212542 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0048d9d8-73dc-41fb-b99b-c04fa3919a76-internal-tls-certs\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.244391 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhz7m\" (UniqueName: \"kubernetes.io/projected/0048d9d8-73dc-41fb-b99b-c04fa3919a76-kube-api-access-fhz7m\") pod \"placement-7d55d9c58d-c2xlp\" (UID: \"0048d9d8-73dc-41fb-b99b-c04fa3919a76\") " pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.244811 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5kvrr\" (UniqueName: \"kubernetes.io/projected/e06205f3-1c76-4d4b-84a5-dc6c2948ad72-kube-api-access-5kvrr\") pod \"keystone-5c48bf866c-6nsdn\" (UID: \"e06205f3-1c76-4d4b-84a5-dc6c2948ad72\") " pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.250281 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.265534 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75d7465bbc-p5vvw"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.270575 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543aa240-ad24-4448-b703-90932a3d3c48-logs\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.270605 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-config-data\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.270632 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-combined-ca-bundle\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.270656 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-config-data-custom\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.270696 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsb8\" (UniqueName: \"kubernetes.io/projected/543aa240-ad24-4448-b703-90932a3d3c48-kube-api-access-pjsb8\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.286268 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d88656f4-pgzp2"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.287790 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.313612 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.315082 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.328893 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d88656f4-pgzp2"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374178 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-config-data-custom\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374224 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a552f87-abed-418d-8167-f62d57f9c4d8-logs\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374248 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-combined-ca-bundle\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374302 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsb8\" (UniqueName: \"kubernetes.io/projected/543aa240-ad24-4448-b703-90932a3d3c48-kube-api-access-pjsb8\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374358 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-config-data\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374438 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-config-data-custom\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374523 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543aa240-ad24-4448-b703-90932a3d3c48-logs\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374557 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-config-data\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.374583 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-combined-ca-bundle\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc 
kubenswrapper[4880]: I1201 03:14:33.374698 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs782\" (UniqueName: \"kubernetes.io/projected/1a552f87-abed-418d-8167-f62d57f9c4d8-kube-api-access-rs782\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.383612 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543aa240-ad24-4448-b703-90932a3d3c48-logs\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.391921 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-579bf799d7-jfhbz"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.393387 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.394639 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-combined-ca-bundle\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.402577 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-config-data-custom\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.410353 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsb8\" (UniqueName: \"kubernetes.io/projected/543aa240-ad24-4448-b703-90932a3d3c48-kube-api-access-pjsb8\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.433367 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-579bf799d7-jfhbz"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478075 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-sb\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478115 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rs782\" (UniqueName: \"kubernetes.io/projected/1a552f87-abed-418d-8167-f62d57f9c4d8-kube-api-access-rs782\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478144 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a552f87-abed-418d-8167-f62d57f9c4d8-logs\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478166 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-combined-ca-bundle\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478201 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-swift-storage-0\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478227 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-nb\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478253 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-config\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478275 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km458\" (UniqueName: \"kubernetes.io/projected/47e0e910-3465-4faf-96e3-68c70d730b79-kube-api-access-km458\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478303 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-config-data\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478320 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-svc\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.478926 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-config-data-custom\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc 
kubenswrapper[4880]: I1201 03:14:33.479852 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a552f87-abed-418d-8167-f62d57f9c4d8-logs\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.485351 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-config-data-custom\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.491403 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-combined-ca-bundle\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.492315 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a552f87-abed-418d-8167-f62d57f9c4d8-config-data\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.518620 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543aa240-ad24-4448-b703-90932a3d3c48-config-data\") pod \"barbican-worker-75d7465bbc-p5vvw\" (UID: \"543aa240-ad24-4448-b703-90932a3d3c48\") " pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc 
kubenswrapper[4880]: I1201 03:14:33.524974 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5856678bc8-5x7bn"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.535482 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.543326 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-75d7465bbc-p5vvw" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.543484 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs782\" (UniqueName: \"kubernetes.io/projected/1a552f87-abed-418d-8167-f62d57f9c4d8-kube-api-access-rs782\") pod \"barbican-keystone-listener-6d88656f4-pgzp2\" (UID: \"1a552f87-abed-418d-8167-f62d57f9c4d8\") " pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.543817 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.571966 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5856678bc8-5x7bn"] Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.581938 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km458\" (UniqueName: \"kubernetes.io/projected/47e0e910-3465-4faf-96e3-68c70d730b79-kube-api-access-km458\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.581989 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data-custom\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: 
\"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585112 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-svc\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585255 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-sb\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585319 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6dlp\" (UniqueName: \"kubernetes.io/projected/889ff544-5e69-4c30-8304-a429e4af1b0f-kube-api-access-l6dlp\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585346 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/889ff544-5e69-4c30-8304-a429e4af1b0f-logs\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585373 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-swift-storage-0\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: 
\"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585396 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-combined-ca-bundle\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585437 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-nb\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585476 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-config\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.585501 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.586723 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-svc\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " 
pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.587271 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-sb\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.588147 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-nb\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.588629 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-config\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.594188 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-swift-storage-0\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.680030 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.695115 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6dlp\" (UniqueName: \"kubernetes.io/projected/889ff544-5e69-4c30-8304-a429e4af1b0f-kube-api-access-l6dlp\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.699830 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/889ff544-5e69-4c30-8304-a429e4af1b0f-logs\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.699924 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-combined-ca-bundle\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.700039 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.700109 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data-custom\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " 
pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.700708 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/889ff544-5e69-4c30-8304-a429e4af1b0f-logs\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.712847 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.716923 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-combined-ca-bundle\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.716955 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km458\" (UniqueName: \"kubernetes.io/projected/47e0e910-3465-4faf-96e3-68c70d730b79-kube-api-access-km458\") pod \"dnsmasq-dns-579bf799d7-jfhbz\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.728527 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data-custom\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: 
I1201 03:14:33.752113 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6dlp\" (UniqueName: \"kubernetes.io/projected/889ff544-5e69-4c30-8304-a429e4af1b0f-kube-api-access-l6dlp\") pod \"barbican-api-5856678bc8-5x7bn\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.787626 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:33 crc kubenswrapper[4880]: I1201 03:14:33.967676 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.133545 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d55d9c58d-c2xlp"] Dec 01 03:14:34 crc kubenswrapper[4880]: W1201 03:14:34.141158 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0048d9d8_73dc_41fb_b99b_c04fa3919a76.slice/crio-dd4df42465e96a68d305f1325ad65aa83be91d287ac389f441689b0edfa82fd3 WatchSource:0}: Error finding container dd4df42465e96a68d305f1325ad65aa83be91d287ac389f441689b0edfa82fd3: Status 404 returned error can't find the container with id dd4df42465e96a68d305f1325ad65aa83be91d287ac389f441689b0edfa82fd3 Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.316928 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75d7465bbc-p5vvw"] Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.354727 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c48bf866c-6nsdn"] Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.522938 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d88656f4-pgzp2"] Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.704757 
4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c48bf866c-6nsdn" event={"ID":"e06205f3-1c76-4d4b-84a5-dc6c2948ad72","Type":"ContainerStarted","Data":"d52173ef45b1b1304e056d668ffbdaf9b12aac000dbf99e50900ba481cdc90c1"} Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.708687 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d55d9c58d-c2xlp" event={"ID":"0048d9d8-73dc-41fb-b99b-c04fa3919a76","Type":"ContainerStarted","Data":"67528dec7d4e55262f9b328df94dfbee423297996930d1ef610fe90752b5ed30"} Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.708917 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d55d9c58d-c2xlp" event={"ID":"0048d9d8-73dc-41fb-b99b-c04fa3919a76","Type":"ContainerStarted","Data":"dd4df42465e96a68d305f1325ad65aa83be91d287ac389f441689b0edfa82fd3"} Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.709842 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75d7465bbc-p5vvw" event={"ID":"543aa240-ad24-4448-b703-90932a3d3c48","Type":"ContainerStarted","Data":"95e7675352c44bc2dfc611b4b039bae9d733d9711fb36c76f2ec22db953abe79"} Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.711141 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lczzw" event={"ID":"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31","Type":"ContainerStarted","Data":"58a63e7f15c7a4d96e51e54f31ac1e76f800c5e5eeed04e56caa5760732a9e99"} Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.723154 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" event={"ID":"1a552f87-abed-418d-8167-f62d57f9c4d8","Type":"ContainerStarted","Data":"a491823caf4cdea4db3a6a554dc8fd2e5f11aa33a250bce6702174f378c40fc1"} Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.743034 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-579bf799d7-jfhbz"] Dec 
01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.745765 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lczzw" podStartSLOduration=6.3211736609999996 podStartE2EDuration="59.745748789s" podCreationTimestamp="2025-12-01 03:13:35 +0000 UTC" firstStartedPulling="2025-12-01 03:13:38.633488065 +0000 UTC m=+1048.144742437" lastFinishedPulling="2025-12-01 03:14:32.058063193 +0000 UTC m=+1101.569317565" observedRunningTime="2025-12-01 03:14:34.728936524 +0000 UTC m=+1104.240190896" watchObservedRunningTime="2025-12-01 03:14:34.745748789 +0000 UTC m=+1104.257003161" Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.782083 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5856678bc8-5x7bn"] Dec 01 03:14:34 crc kubenswrapper[4880]: I1201 03:14:34.857693 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9" path="/var/lib/kubelet/pods/6f8b7fa8-a0e4-43bb-87c4-f3311d8514c9/volumes" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.290701 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6ddc7fc844-5qd9h" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.442314 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56cc96959b-rrjz7" podUID="24a10152-f651-41de-9680-872d96690cd5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.746791 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5856678bc8-5x7bn" 
event={"ID":"889ff544-5e69-4c30-8304-a429e4af1b0f","Type":"ContainerStarted","Data":"21b2513810dd23a24b382d31036fdfbcee15b516f1d10507dbb2103e03583774"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.746833 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5856678bc8-5x7bn" event={"ID":"889ff544-5e69-4c30-8304-a429e4af1b0f","Type":"ContainerStarted","Data":"f46f883f710447f5f943bc294873f65548fef128288cc4ea8765587e339070c7"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.746843 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5856678bc8-5x7bn" event={"ID":"889ff544-5e69-4c30-8304-a429e4af1b0f","Type":"ContainerStarted","Data":"da4ef8199b94fa66d76fe8f22667a8cdfb5ec8779dec48285764a2a50113d61f"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.747903 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.747929 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.753334 4880 generic.go:334] "Generic (PLEG): container finished" podID="47e0e910-3465-4faf-96e3-68c70d730b79" containerID="61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008" exitCode=0 Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.753398 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" event={"ID":"47e0e910-3465-4faf-96e3-68c70d730b79","Type":"ContainerDied","Data":"61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.753427 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" 
event={"ID":"47e0e910-3465-4faf-96e3-68c70d730b79","Type":"ContainerStarted","Data":"e0a14e8e5e9a02eebfa5d7870c4ec799c4c6a229f4b36caadda66e202fdbb481"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.760950 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c48bf866c-6nsdn" event={"ID":"e06205f3-1c76-4d4b-84a5-dc6c2948ad72","Type":"ContainerStarted","Data":"1a50259145bb2c952699bc92f2ed4aea26a0fce8962e63a0f8e8337a3b20433a"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.761019 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5c48bf866c-6nsdn" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.773327 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d55d9c58d-c2xlp" event={"ID":"0048d9d8-73dc-41fb-b99b-c04fa3919a76","Type":"ContainerStarted","Data":"f5d7c678160fb463d84d81b2a70293039f39401f94c2b69ae2be82d6ca0d4df5"} Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.774062 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.774096 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d55d9c58d-c2xlp" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.780809 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5856678bc8-5x7bn" podStartSLOduration=2.7807974189999998 podStartE2EDuration="2.780797419s" podCreationTimestamp="2025-12-01 03:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:35.780452591 +0000 UTC m=+1105.291706973" watchObservedRunningTime="2025-12-01 03:14:35.780797419 +0000 UTC m=+1105.292051791" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.829344 4880 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/keystone-5c48bf866c-6nsdn" podStartSLOduration=3.829324295 podStartE2EDuration="3.829324295s" podCreationTimestamp="2025-12-01 03:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:35.806977821 +0000 UTC m=+1105.318232193" watchObservedRunningTime="2025-12-01 03:14:35.829324295 +0000 UTC m=+1105.340578667" Dec 01 03:14:35 crc kubenswrapper[4880]: I1201 03:14:35.891388 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d55d9c58d-c2xlp" podStartSLOduration=3.891371023 podStartE2EDuration="3.891371023s" podCreationTimestamp="2025-12-01 03:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:35.847362471 +0000 UTC m=+1105.358616843" watchObservedRunningTime="2025-12-01 03:14:35.891371023 +0000 UTC m=+1105.402625395" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.893227 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d4bfbddfb-bh2gq"] Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.899601 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.919665 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.919855 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.941583 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d4bfbddfb-bh2gq"] Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993484 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-config-data\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993661 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-config-data-custom\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993688 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-internal-tls-certs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993740 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-public-tls-certs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993769 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jwt\" (UniqueName: \"kubernetes.io/projected/7f315b87-8b8b-4428-abbd-07d893337bdb-kube-api-access-75jwt\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993786 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-combined-ca-bundle\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:36 crc kubenswrapper[4880]: I1201 03:14:36.993819 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f315b87-8b8b-4428-abbd-07d893337bdb-logs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.095830 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-config-data-custom\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.096121 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-internal-tls-certs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.096161 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-public-tls-certs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.096189 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jwt\" (UniqueName: \"kubernetes.io/projected/7f315b87-8b8b-4428-abbd-07d893337bdb-kube-api-access-75jwt\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.096211 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-combined-ca-bundle\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.096229 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f315b87-8b8b-4428-abbd-07d893337bdb-logs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.096282 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-config-data\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.098481 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f315b87-8b8b-4428-abbd-07d893337bdb-logs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.102271 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-combined-ca-bundle\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.103150 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-config-data\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.107271 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-public-tls-certs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.107458 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-config-data-custom\") pod 
\"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.107618 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f315b87-8b8b-4428-abbd-07d893337bdb-internal-tls-certs\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.119365 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jwt\" (UniqueName: \"kubernetes.io/projected/7f315b87-8b8b-4428-abbd-07d893337bdb-kube-api-access-75jwt\") pod \"barbican-api-5d4bfbddfb-bh2gq\" (UID: \"7f315b87-8b8b-4428-abbd-07d893337bdb\") " pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.272613 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.840686 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" event={"ID":"47e0e910-3465-4faf-96e3-68c70d730b79","Type":"ContainerStarted","Data":"8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c"} Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.841602 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:37 crc kubenswrapper[4880]: I1201 03:14:37.900268 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" podStartSLOduration=4.900250389 podStartE2EDuration="4.900250389s" podCreationTimestamp="2025-12-01 03:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:37.890089852 +0000 UTC m=+1107.401344224" watchObservedRunningTime="2025-12-01 03:14:37.900250389 +0000 UTC m=+1107.411504761" Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.245112 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d4bfbddfb-bh2gq"] Dec 01 03:14:38 crc kubenswrapper[4880]: W1201 03:14:38.245910 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f315b87_8b8b_4428_abbd_07d893337bdb.slice/crio-22cb39028db6222439c0ba15895993dd5e4d689f8d18a1949460e4b08eece492 WatchSource:0}: Error finding container 22cb39028db6222439c0ba15895993dd5e4d689f8d18a1949460e4b08eece492: Status 404 returned error can't find the container with id 22cb39028db6222439c0ba15895993dd5e4d689f8d18a1949460e4b08eece492 Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.852289 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" event={"ID":"1a552f87-abed-418d-8167-f62d57f9c4d8","Type":"ContainerStarted","Data":"2ce27fa2f25519ab162bc85f629e48a7acffed29246daf23e1626369c8c233d2"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.852598 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" event={"ID":"1a552f87-abed-418d-8167-f62d57f9c4d8","Type":"ContainerStarted","Data":"5851a3961f683a6ba8016ae8387e1390e81ee6e9461481523e2c0c64ecb235ef"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.861594 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" event={"ID":"7f315b87-8b8b-4428-abbd-07d893337bdb","Type":"ContainerStarted","Data":"f2fbe0a646c10482cb441f2cb92bd8536c9bf2080ce636fc539f97fb2bc03599"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.861642 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" event={"ID":"7f315b87-8b8b-4428-abbd-07d893337bdb","Type":"ContainerStarted","Data":"07edaab743212f3d1b5e5fd3b250dd9657fd9e0f5133aa3ace325873436012fe"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.861652 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" event={"ID":"7f315b87-8b8b-4428-abbd-07d893337bdb","Type":"ContainerStarted","Data":"22cb39028db6222439c0ba15895993dd5e4d689f8d18a1949460e4b08eece492"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.862098 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.862178 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.869693 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-75d7465bbc-p5vvw" event={"ID":"543aa240-ad24-4448-b703-90932a3d3c48","Type":"ContainerStarted","Data":"d11cebcfd1dc983bec4fdfdd56e5e7d4f9079127e7a06e958ad6676b55ec4dc0"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.869731 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75d7465bbc-p5vvw" event={"ID":"543aa240-ad24-4448-b703-90932a3d3c48","Type":"ContainerStarted","Data":"05e8752aee9c1d860bc3a89d4f32ae4f4c5c8ceb93fd83819426df8f6c52ad98"} Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.877961 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d88656f4-pgzp2" podStartSLOduration=2.914637301 podStartE2EDuration="5.877940181s" podCreationTimestamp="2025-12-01 03:14:33 +0000 UTC" firstStartedPulling="2025-12-01 03:14:34.580249827 +0000 UTC m=+1104.091504189" lastFinishedPulling="2025-12-01 03:14:37.543552707 +0000 UTC m=+1107.054807069" observedRunningTime="2025-12-01 03:14:38.866717658 +0000 UTC m=+1108.377972030" watchObservedRunningTime="2025-12-01 03:14:38.877940181 +0000 UTC m=+1108.389194553" Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.905792 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" podStartSLOduration=2.905772605 podStartE2EDuration="2.905772605s" podCreationTimestamp="2025-12-01 03:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:38.883302187 +0000 UTC m=+1108.394556569" watchObservedRunningTime="2025-12-01 03:14:38.905772605 +0000 UTC m=+1108.417026967" Dec 01 03:14:38 crc kubenswrapper[4880]: I1201 03:14:38.921295 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-75d7465bbc-p5vvw" podStartSLOduration=2.714252229 podStartE2EDuration="5.921276126s" 
podCreationTimestamp="2025-12-01 03:14:33 +0000 UTC" firstStartedPulling="2025-12-01 03:14:34.345164768 +0000 UTC m=+1103.856419140" lastFinishedPulling="2025-12-01 03:14:37.552188665 +0000 UTC m=+1107.063443037" observedRunningTime="2025-12-01 03:14:38.901181139 +0000 UTC m=+1108.412435511" watchObservedRunningTime="2025-12-01 03:14:38.921276126 +0000 UTC m=+1108.432530488" Dec 01 03:14:40 crc kubenswrapper[4880]: I1201 03:14:40.891236 4880 generic.go:334] "Generic (PLEG): container finished" podID="81ee6695-1440-4087-b17a-0af2371eceed" containerID="303e097529623c44ececcbd246b782561ac6ab40f4a2853928bd50f834c952f0" exitCode=0 Dec 01 03:14:40 crc kubenswrapper[4880]: I1201 03:14:40.891459 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgzm4" event={"ID":"81ee6695-1440-4087-b17a-0af2371eceed","Type":"ContainerDied","Data":"303e097529623c44ececcbd246b782561ac6ab40f4a2853928bd50f834c952f0"} Dec 01 03:14:42 crc kubenswrapper[4880]: I1201 03:14:42.908497 4880 generic.go:334] "Generic (PLEG): container finished" podID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" containerID="58a63e7f15c7a4d96e51e54f31ac1e76f800c5e5eeed04e56caa5760732a9e99" exitCode=0 Dec 01 03:14:42 crc kubenswrapper[4880]: I1201 03:14:42.908568 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lczzw" event={"ID":"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31","Type":"ContainerDied","Data":"58a63e7f15c7a4d96e51e54f31ac1e76f800c5e5eeed04e56caa5760732a9e99"} Dec 01 03:14:43 crc kubenswrapper[4880]: I1201 03:14:43.794017 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:43 crc kubenswrapper[4880]: I1201 03:14:43.878640 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-577f8db8c5-k8vsc"] Dec 01 03:14:43 crc kubenswrapper[4880]: I1201 03:14:43.879122 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="dnsmasq-dns" containerID="cri-o://798b8ca8a8317acd9e9448723b1682ec624edcc5f5ea1c2d99194ec80ed48c7d" gracePeriod=10 Dec 01 03:14:44 crc kubenswrapper[4880]: I1201 03:14:44.931180 4880 generic.go:334] "Generic (PLEG): container finished" podID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerID="798b8ca8a8317acd9e9448723b1682ec624edcc5f5ea1c2d99194ec80ed48c7d" exitCode=0 Dec 01 03:14:44 crc kubenswrapper[4880]: I1201 03:14:44.931271 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" event={"ID":"8826ba7a-b62d-4a95-b97c-4bab19f08919","Type":"ContainerDied","Data":"798b8ca8a8317acd9e9448723b1682ec624edcc5f5ea1c2d99194ec80ed48c7d"} Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.277061 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6ddc7fc844-5qd9h" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.441228 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56cc96959b-rrjz7" podUID="24a10152-f651-41de-9680-872d96690cd5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.594267 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-mgzm4" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.687241 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-combined-ca-bundle\") pod \"81ee6695-1440-4087-b17a-0af2371eceed\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.687410 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzwb\" (UniqueName: \"kubernetes.io/projected/81ee6695-1440-4087-b17a-0af2371eceed-kube-api-access-8gzwb\") pod \"81ee6695-1440-4087-b17a-0af2371eceed\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.687459 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-config-data\") pod \"81ee6695-1440-4087-b17a-0af2371eceed\" (UID: \"81ee6695-1440-4087-b17a-0af2371eceed\") " Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.719032 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ee6695-1440-4087-b17a-0af2371eceed-kube-api-access-8gzwb" (OuterVolumeSpecName: "kube-api-access-8gzwb") pod "81ee6695-1440-4087-b17a-0af2371eceed" (UID: "81ee6695-1440-4087-b17a-0af2371eceed"). InnerVolumeSpecName "kube-api-access-8gzwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.789036 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gzwb\" (UniqueName: \"kubernetes.io/projected/81ee6695-1440-4087-b17a-0af2371eceed-kube-api-access-8gzwb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.813366 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-config-data" (OuterVolumeSpecName: "config-data") pod "81ee6695-1440-4087-b17a-0af2371eceed" (UID: "81ee6695-1440-4087-b17a-0af2371eceed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.818494 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ee6695-1440-4087-b17a-0af2371eceed" (UID: "81ee6695-1440-4087-b17a-0af2371eceed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.890599 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.890632 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ee6695-1440-4087-b17a-0af2371eceed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.940162 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mgzm4" event={"ID":"81ee6695-1440-4087-b17a-0af2371eceed","Type":"ContainerDied","Data":"1ec45f1612c1c13fb15157c25792dab2a6b203a187f57a89e31ede08bf7d3286"} Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.940197 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec45f1612c1c13fb15157c25792dab2a6b203a187f57a89e31ede08bf7d3286" Dec 01 03:14:45 crc kubenswrapper[4880]: I1201 03:14:45.940266 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mgzm4" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.558259 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.749281 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.751722 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lczzw" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812609 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-sb\") pod \"8826ba7a-b62d-4a95-b97c-4bab19f08919\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812675 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-db-sync-config-data\") pod \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812712 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-combined-ca-bundle\") pod \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812787 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-svc\") pod \"8826ba7a-b62d-4a95-b97c-4bab19f08919\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812809 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-config\") pod \"8826ba7a-b62d-4a95-b97c-4bab19f08919\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812842 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fps64\" (UniqueName: 
\"kubernetes.io/projected/8826ba7a-b62d-4a95-b97c-4bab19f08919-kube-api-access-fps64\") pod \"8826ba7a-b62d-4a95-b97c-4bab19f08919\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812858 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcvhq\" (UniqueName: \"kubernetes.io/projected/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-kube-api-access-xcvhq\") pod \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812890 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-config-data\") pod \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812936 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-scripts\") pod \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812964 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-etc-machine-id\") pod \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\" (UID: \"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.812999 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-swift-storage-0\") pod \"8826ba7a-b62d-4a95-b97c-4bab19f08919\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 
03:14:46.813044 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-nb\") pod \"8826ba7a-b62d-4a95-b97c-4bab19f08919\" (UID: \"8826ba7a-b62d-4a95-b97c-4bab19f08919\") " Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.830329 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" (UID: "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.852082 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8826ba7a-b62d-4a95-b97c-4bab19f08919-kube-api-access-fps64" (OuterVolumeSpecName: "kube-api-access-fps64") pod "8826ba7a-b62d-4a95-b97c-4bab19f08919" (UID: "8826ba7a-b62d-4a95-b97c-4bab19f08919"). InnerVolumeSpecName "kube-api-access-fps64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.858823 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-scripts" (OuterVolumeSpecName: "scripts") pod "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" (UID: "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.887452 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" (UID: "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.888647 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-kube-api-access-xcvhq" (OuterVolumeSpecName: "kube-api-access-xcvhq") pod "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" (UID: "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31"). InnerVolumeSpecName "kube-api-access-xcvhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.917144 4880 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.917415 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fps64\" (UniqueName: \"kubernetes.io/projected/8826ba7a-b62d-4a95-b97c-4bab19f08919-kube-api-access-fps64\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.917481 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcvhq\" (UniqueName: \"kubernetes.io/projected/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-kube-api-access-xcvhq\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.917540 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.917594 4880 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.965623 4880 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8826ba7a-b62d-4a95-b97c-4bab19f08919" (UID: "8826ba7a-b62d-4a95-b97c-4bab19f08919"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.977014 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" (UID: "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.977352 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" Dec 01 03:14:46 crc kubenswrapper[4880]: I1201 03:14:46.988126 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lczzw" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.000585 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-config" (OuterVolumeSpecName: "config") pod "8826ba7a-b62d-4a95-b97c-4bab19f08919" (UID: "8826ba7a-b62d-4a95-b97c-4bab19f08919"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.010015 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-config-data" (OuterVolumeSpecName: "config-data") pod "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" (UID: "fe59d4ff-1b09-4404-a45d-4b2b73e3ac31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.010790 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.010905 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" event={"ID":"8826ba7a-b62d-4a95-b97c-4bab19f08919","Type":"ContainerDied","Data":"bc71223837848571a1d6d9fed123299293e008af73d611c85775ea3779e1bcdb"} Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.010987 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lczzw" event={"ID":"fe59d4ff-1b09-4404-a45d-4b2b73e3ac31","Type":"ContainerDied","Data":"a7722f7c911aaf4d1eb32383c7f023e42e2d30dbd358600b1409bd64576a5739"} Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.011051 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7722f7c911aaf4d1eb32383c7f023e42e2d30dbd358600b1409bd64576a5739" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.011129 4880 scope.go:117] "RemoveContainer" containerID="798b8ca8a8317acd9e9448723b1682ec624edcc5f5ea1c2d99194ec80ed48c7d" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.023784 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.023813 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.023823 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31-config-data\") on 
node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.023831 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.031444 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8826ba7a-b62d-4a95-b97c-4bab19f08919" (UID: "8826ba7a-b62d-4a95-b97c-4bab19f08919"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.049917 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8826ba7a-b62d-4a95-b97c-4bab19f08919" (UID: "8826ba7a-b62d-4a95-b97c-4bab19f08919"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.068433 4880 scope.go:117] "RemoveContainer" containerID="1b94c7675392d0818d2b5dc2adae225ae9a3eff6f041f18f30338ebc707c2db6" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.086672 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8826ba7a-b62d-4a95-b97c-4bab19f08919" (UID: "8826ba7a-b62d-4a95-b97c-4bab19f08919"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.127998 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.128025 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.128034 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8826ba7a-b62d-4a95-b97c-4bab19f08919-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:47 crc kubenswrapper[4880]: E1201 03:14:47.252019 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.305927 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-577f8db8c5-k8vsc"] Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.320353 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-577f8db8c5-k8vsc"] Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.368608 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.368820 4880 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.391589 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.997550 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerStarted","Data":"052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c"} Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.997686 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="ceilometer-notification-agent" containerID="cri-o://8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35" gracePeriod=30 Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.997733 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="sg-core" containerID="cri-o://d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c" gracePeriod=30 Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.997731 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="proxy-httpd" containerID="cri-o://052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c" gracePeriod=30 Dec 01 03:14:47 crc kubenswrapper[4880]: I1201 03:14:47.998084 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 03:14:48 
crc kubenswrapper[4880]: I1201 03:14:48.114593 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 03:14:48 crc kubenswrapper[4880]: E1201 03:14:48.114962 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="dnsmasq-dns" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.114976 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="dnsmasq-dns" Dec 01 03:14:48 crc kubenswrapper[4880]: E1201 03:14:48.114999 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="init" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.115006 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="init" Dec 01 03:14:48 crc kubenswrapper[4880]: E1201 03:14:48.115027 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ee6695-1440-4087-b17a-0af2371eceed" containerName="heat-db-sync" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.115033 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ee6695-1440-4087-b17a-0af2371eceed" containerName="heat-db-sync" Dec 01 03:14:48 crc kubenswrapper[4880]: E1201 03:14:48.115046 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" containerName="cinder-db-sync" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.115052 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" containerName="cinder-db-sync" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.115211 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="dnsmasq-dns" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.115228 4880 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="81ee6695-1440-4087-b17a-0af2371eceed" containerName="heat-db-sync" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.115238 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" containerName="cinder-db-sync" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.172014 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.189150 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.189350 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-6kc8g" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.189509 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.190384 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.203961 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.229962 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-585499bb75-ggpgg"] Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.232442 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.284769 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-585499bb75-ggpgg"] Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.375978 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggj2\" (UniqueName: \"kubernetes.io/projected/85741d09-57c9-4c00-8a01-70258aae8f7b-kube-api-access-pggj2\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376261 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376280 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-svc\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376320 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-config\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376381 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-sb\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376408 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw27t\" (UniqueName: \"kubernetes.io/projected/22261634-3cd2-4faf-9264-9234fa4b43ca-kube-api-access-jw27t\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376436 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-nb\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376453 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376479 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-swift-storage-0\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376496 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85741d09-57c9-4c00-8a01-70258aae8f7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.376516 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.423023 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.424524 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.435231 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.473007 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477770 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-nb\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477814 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477848 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-swift-storage-0\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477879 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85741d09-57c9-4c00-8a01-70258aae8f7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477902 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477947 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggj2\" (UniqueName: \"kubernetes.io/projected/85741d09-57c9-4c00-8a01-70258aae8f7b-kube-api-access-pggj2\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477962 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.477979 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-svc\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.478018 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-config\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.478040 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.478084 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-sb\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.478115 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw27t\" (UniqueName: \"kubernetes.io/projected/22261634-3cd2-4faf-9264-9234fa4b43ca-kube-api-access-jw27t\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.479106 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-nb\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.487982 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85741d09-57c9-4c00-8a01-70258aae8f7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.488632 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-swift-storage-0\") pod 
\"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.492898 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-config\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.497064 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-svc\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.511232 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-sb\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.519630 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.528119 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.529065 
4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.529710 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.532429 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggj2\" (UniqueName: \"kubernetes.io/projected/85741d09-57c9-4c00-8a01-70258aae8f7b-kube-api-access-pggj2\") pod \"cinder-scheduler-0\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") " pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.541679 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw27t\" (UniqueName: \"kubernetes.io/projected/22261634-3cd2-4faf-9264-9234fa4b43ca-kube-api-access-jw27t\") pod \"dnsmasq-dns-585499bb75-ggpgg\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.579312 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.579572 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.579668 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad61e418-7135-4561-af80-28a601030e3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.579741 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-scripts\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.579806 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad61e418-7135-4561-af80-28a601030e3b-logs\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.579894 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98j47\" (UniqueName: \"kubernetes.io/projected/ad61e418-7135-4561-af80-28a601030e3b-kube-api-access-98j47\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.580026 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data\") pod \"cinder-api-0\" (UID: 
\"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.580266 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682469 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682505 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682556 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682592 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad61e418-7135-4561-af80-28a601030e3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682612 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-scripts\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " 
pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682627 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad61e418-7135-4561-af80-28a601030e3b-logs\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.682650 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98j47\" (UniqueName: \"kubernetes.io/projected/ad61e418-7135-4561-af80-28a601030e3b-kube-api-access-98j47\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.685107 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad61e418-7135-4561-af80-28a601030e3b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.687334 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad61e418-7135-4561-af80-28a601030e3b-logs\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.687982 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.689685 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-scripts\") pod 
\"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.700528 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.721449 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data-custom\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.721453 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98j47\" (UniqueName: \"kubernetes.io/projected/ad61e418-7135-4561-af80-28a601030e3b-kube-api-access-98j47\") pod \"cinder-api-0\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " pod="openstack/cinder-api-0" Dec 01 03:14:48 crc kubenswrapper[4880]: E1201 03:14:48.782996 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod010f41a5_3ac7_48d3_b20c_e9b8add221ca.slice/crio-conmon-052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod010f41a5_3ac7_48d3_b20c_e9b8add221ca.slice/crio-052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c.scope\": RecentStats: unable to find data in memory cache]" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.811061 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" path="/var/lib/kubelet/pods/8826ba7a-b62d-4a95-b97c-4bab19f08919/volumes" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.840268 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 03:14:48 crc kubenswrapper[4880]: I1201 03:14:48.982571 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.029199 4880 generic.go:334] "Generic (PLEG): container finished" podID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerID="052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c" exitCode=0 Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.029572 4880 generic.go:334] "Generic (PLEG): container finished" podID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerID="d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c" exitCode=2 Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.029593 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerDied","Data":"052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c"} Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.029619 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerDied","Data":"d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c"} Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.187251 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-585499bb75-ggpgg"] Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.516307 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.544775 4880 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 03:14:49 crc kubenswrapper[4880]: W1201 03:14:49.643653 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85741d09_57c9_4c00_8a01_70258aae8f7b.slice/crio-60b0909cccc34a3ef9cf69a96358501062bf75e93b326dc6e99c4676d5089811 WatchSource:0}: Error finding container 60b0909cccc34a3ef9cf69a96358501062bf75e93b326dc6e99c4676d5089811: Status 404 returned error can't find the container with id 60b0909cccc34a3ef9cf69a96358501062bf75e93b326dc6e99c4676d5089811 Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.708947 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7877878478-zq76n"] Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.709175 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7877878478-zq76n" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-api" containerID="cri-o://77e9e062b9ae6444a7b89c7660df7c4cac97fb89df8a998362c21a097b2908f7" gracePeriod=30 Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.709419 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7877878478-zq76n" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-httpd" containerID="cri-o://380169dcb254095b355faf5dd1dfa0d266524fd767be92ab5d1b06667b934dfc" gracePeriod=30 Dec 01 03:14:49 crc kubenswrapper[4880]: I1201 03:14:49.763103 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:14:50 crc kubenswrapper[4880]: I1201 03:14:50.046978 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85741d09-57c9-4c00-8a01-70258aae8f7b","Type":"ContainerStarted","Data":"60b0909cccc34a3ef9cf69a96358501062bf75e93b326dc6e99c4676d5089811"} Dec 01 03:14:50 crc kubenswrapper[4880]: I1201 
03:14:50.058560 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad61e418-7135-4561-af80-28a601030e3b","Type":"ContainerStarted","Data":"b628f301e210bdcc76839f33028508da81ccc76376d5e06471d686cc8e0a60e9"} Dec 01 03:14:50 crc kubenswrapper[4880]: I1201 03:14:50.064385 4880 generic.go:334] "Generic (PLEG): container finished" podID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerID="44ab0fefae3315753fb1181ff2051a0f5c3eb3b66b5cac2a27675746879db49f" exitCode=0 Dec 01 03:14:50 crc kubenswrapper[4880]: I1201 03:14:50.064408 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" event={"ID":"22261634-3cd2-4faf-9264-9234fa4b43ca","Type":"ContainerDied","Data":"44ab0fefae3315753fb1181ff2051a0f5c3eb3b66b5cac2a27675746879db49f"} Dec 01 03:14:50 crc kubenswrapper[4880]: I1201 03:14:50.064422 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" event={"ID":"22261634-3cd2-4faf-9264-9234fa4b43ca","Type":"ContainerStarted","Data":"855cb176751d1a94c963a6825030682c0caef8a9a068be7509712459bcde4b21"} Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.011095 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.079286 4880 generic.go:334] "Generic (PLEG): container finished" podID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerID="380169dcb254095b355faf5dd1dfa0d266524fd767be92ab5d1b06667b934dfc" exitCode=0 Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.079357 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877878478-zq76n" 
event={"ID":"f883fbd2-c0ad-4d3e-a56d-c99c361b6439","Type":"ContainerDied","Data":"380169dcb254095b355faf5dd1dfa0d266524fd767be92ab5d1b06667b934dfc"} Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.081713 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" event={"ID":"22261634-3cd2-4faf-9264-9234fa4b43ca","Type":"ContainerStarted","Data":"97758e157c7df9f6d3d14bcd987018f39501d94a77d192c407c6b4b9dc126063"} Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.081840 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.100643 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" podStartSLOduration=3.100619574 podStartE2EDuration="3.100619574s" podCreationTimestamp="2025-12-01 03:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:51.096518201 +0000 UTC m=+1120.607772573" watchObservedRunningTime="2025-12-01 03:14:51.100619574 +0000 UTC m=+1120.611873936" Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.280027 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" podUID="7f315b87-8b8b-4428-abbd-07d893337bdb" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.327084 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-577f8db8c5-k8vsc" podUID="8826ba7a-b62d-4a95-b97c-4bab19f08919" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Dec 01 03:14:51 crc kubenswrapper[4880]: I1201 03:14:51.986889 4880 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.015050 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.108767 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85741d09-57c9-4c00-8a01-70258aae8f7b","Type":"ContainerStarted","Data":"b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c"} Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.141577 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad61e418-7135-4561-af80-28a601030e3b","Type":"ContainerStarted","Data":"5da5fe1ed2572c5189882f5b93aae833278063bb0d054b3163cd6c11ac8028fe"} Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.174167 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.326058 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" podUID="7f315b87-8b8b-4428-abbd-07d893337bdb" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.326689 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" podUID="7f315b87-8b8b-4428-abbd-07d893337bdb" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.368902 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.369958 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d4bfbddfb-bh2gq" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.433057 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.455033 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5856678bc8-5x7bn"] Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.455337 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" containerID="cri-o://f46f883f710447f5f943bc294873f65548fef128288cc4ea8765587e339070c7" gracePeriod=30 Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.455788 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api" containerID="cri-o://21b2513810dd23a24b382d31036fdfbcee15b516f1d10507dbb2103e03583774" gracePeriod=30 Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.514302 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF" Dec 01 03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.538190 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 
03:14:52 crc kubenswrapper[4880]: I1201 03:14:52.959800 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.178250 4880 generic.go:334] "Generic (PLEG): container finished" podID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerID="f46f883f710447f5f943bc294873f65548fef128288cc4ea8765587e339070c7" exitCode=143 Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.178385 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5856678bc8-5x7bn" event={"ID":"889ff544-5e69-4c30-8304-a429e4af1b0f","Type":"ContainerDied","Data":"f46f883f710447f5f943bc294873f65548fef128288cc4ea8765587e339070c7"} Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.193606 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85741d09-57c9-4c00-8a01-70258aae8f7b","Type":"ContainerStarted","Data":"a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925"} Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.196962 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api-log" containerID="cri-o://5da5fe1ed2572c5189882f5b93aae833278063bb0d054b3163cd6c11ac8028fe" gracePeriod=30 Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.197039 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad61e418-7135-4561-af80-28a601030e3b","Type":"ContainerStarted","Data":"dd3de9c28a81ad0cc2e5d47aa8e0ff0cceff227768670c142654202d23e105f0"} Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.197487 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.197522 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api" containerID="cri-o://dd3de9c28a81ad0cc2e5d47aa8e0ff0cceff227768670c142654202d23e105f0" gracePeriod=30 Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.228608 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.796512744 podStartE2EDuration="5.228585759s" podCreationTimestamp="2025-12-01 03:14:48 +0000 UTC" firstStartedPulling="2025-12-01 03:14:49.66091009 +0000 UTC m=+1119.172164462" lastFinishedPulling="2025-12-01 03:14:50.092983105 +0000 UTC m=+1119.604237477" observedRunningTime="2025-12-01 03:14:53.217445688 +0000 UTC m=+1122.728700070" watchObservedRunningTime="2025-12-01 03:14:53.228585759 +0000 UTC m=+1122.739840131" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.242205 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.242189863 podStartE2EDuration="5.242189863s" podCreationTimestamp="2025-12-01 03:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:14:53.234373185 +0000 UTC m=+1122.745627557" watchObservedRunningTime="2025-12-01 03:14:53.242189863 +0000 UTC m=+1122.753444235" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.767171 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.840826 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.877918 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-config-data\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.878270 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-scripts\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.878399 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jbds\" (UniqueName: \"kubernetes.io/projected/010f41a5-3ac7-48d3-b20c-e9b8add221ca-kube-api-access-5jbds\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.878504 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-combined-ca-bundle\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.878625 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-sg-core-conf-yaml\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 
01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.878755 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-log-httpd\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.878847 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-run-httpd\") pod \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\" (UID: \"010f41a5-3ac7-48d3-b20c-e9b8add221ca\") " Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.894987 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.898013 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.912220 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-scripts" (OuterVolumeSpecName: "scripts") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.921752 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/010f41a5-3ac7-48d3-b20c-e9b8add221ca-kube-api-access-5jbds" (OuterVolumeSpecName: "kube-api-access-5jbds") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "kube-api-access-5jbds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.962518 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.981871 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.981907 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jbds\" (UniqueName: \"kubernetes.io/projected/010f41a5-3ac7-48d3-b20c-e9b8add221ca-kube-api-access-5jbds\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.981918 4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.981927 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-log-httpd\") on node 
\"crc\" DevicePath \"\"" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.981936 4880 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/010f41a5-3ac7-48d3-b20c-e9b8add221ca-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:53 crc kubenswrapper[4880]: I1201 03:14:53.999168 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.024091 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-config-data" (OuterVolumeSpecName: "config-data") pod "010f41a5-3ac7-48d3-b20c-e9b8add221ca" (UID: "010f41a5-3ac7-48d3-b20c-e9b8add221ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.083705 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.083733 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/010f41a5-3ac7-48d3-b20c-e9b8add221ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.205814 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad61e418-7135-4561-af80-28a601030e3b" containerID="5da5fe1ed2572c5189882f5b93aae833278063bb0d054b3163cd6c11ac8028fe" exitCode=143 Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.205909 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad61e418-7135-4561-af80-28a601030e3b","Type":"ContainerDied","Data":"5da5fe1ed2572c5189882f5b93aae833278063bb0d054b3163cd6c11ac8028fe"} Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.207779 4880 generic.go:334] "Generic (PLEG): container finished" podID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerID="8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35" exitCode=0 Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.207840 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.207891 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerDied","Data":"8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35"} Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.207919 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"010f41a5-3ac7-48d3-b20c-e9b8add221ca","Type":"ContainerDied","Data":"2798d4d74779468d8aebcc914157aa6efcdf1b3f21147abc667e58286400338e"} Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.207935 4880 scope.go:117] "RemoveContainer" containerID="052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.233870 4880 scope.go:117] "RemoveContainer" containerID="d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.261844 4880 scope.go:117] "RemoveContainer" containerID="8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.278246 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.305280 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.316843 4880 scope.go:117] "RemoveContainer" containerID="052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c" Dec 01 03:14:54 crc kubenswrapper[4880]: E1201 03:14:54.317171 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c\": container with ID starting with 
052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c not found: ID does not exist" containerID="052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.317201 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c"} err="failed to get container status \"052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c\": rpc error: code = NotFound desc = could not find container \"052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c\": container with ID starting with 052a243db0a0520fd32c824a823e7c6788a22250bd22240e3118706ea578a09c not found: ID does not exist" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.317221 4880 scope.go:117] "RemoveContainer" containerID="d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c" Dec 01 03:14:54 crc kubenswrapper[4880]: E1201 03:14:54.317393 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c\": container with ID starting with d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c not found: ID does not exist" containerID="d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.317410 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c"} err="failed to get container status \"d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c\": rpc error: code = NotFound desc = could not find container \"d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c\": container with ID starting with d7c004f54f16ad7ae13b19335f8c108f24836347323880454be6b8e1e58d466c not found: ID does not 
exist" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.317421 4880 scope.go:117] "RemoveContainer" containerID="8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35" Dec 01 03:14:54 crc kubenswrapper[4880]: E1201 03:14:54.317575 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35\": container with ID starting with 8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35 not found: ID does not exist" containerID="8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.317589 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35"} err="failed to get container status \"8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35\": rpc error: code = NotFound desc = could not find container \"8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35\": container with ID starting with 8f15664f049c58211b518ccd4f8d5a356f262a061d06bcfee77acf2655565c35 not found: ID does not exist" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.334411 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:14:54 crc kubenswrapper[4880]: E1201 03:14:54.334816 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="proxy-httpd" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.334832 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="proxy-httpd" Dec 01 03:14:54 crc kubenswrapper[4880]: E1201 03:14:54.334845 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="ceilometer-notification-agent" 
Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.334852 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="ceilometer-notification-agent" Dec 01 03:14:54 crc kubenswrapper[4880]: E1201 03:14:54.334873 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="sg-core" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.334891 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="sg-core" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.335085 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="sg-core" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.335107 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="ceilometer-notification-agent" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.335127 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" containerName="proxy-httpd" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.336636 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.344273 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.344741 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.345010 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.489774 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.489909 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-log-httpd\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.489962 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7pg\" (UniqueName: \"kubernetes.io/projected/e50f15dc-ed5f-4f63-872d-645d388b3d18-kube-api-access-pv7pg\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.489990 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-scripts\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " 
pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.490019 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-config-data\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.490038 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.490057 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-run-httpd\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.591845 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7pg\" (UniqueName: \"kubernetes.io/projected/e50f15dc-ed5f-4f63-872d-645d388b3d18-kube-api-access-pv7pg\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592173 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-scripts\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592274 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-config-data\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592360 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592447 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-run-httpd\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592650 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592780 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-log-httpd\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.592934 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-run-httpd\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: 
I1201 03:14:54.593219 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-log-httpd\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.598036 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-config-data\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.599271 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-scripts\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.599394 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.599578 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.630212 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7pg\" (UniqueName: \"kubernetes.io/projected/e50f15dc-ed5f-4f63-872d-645d388b3d18-kube-api-access-pv7pg\") pod \"ceilometer-0\" (UID: 
\"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.680411 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:14:54 crc kubenswrapper[4880]: I1201 03:14:54.801695 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="010f41a5-3ac7-48d3-b20c-e9b8add221ca" path="/var/lib/kubelet/pods/010f41a5-3ac7-48d3-b20c-e9b8add221ca/volumes" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.219209 4880 generic.go:334] "Generic (PLEG): container finished" podID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerID="77e9e062b9ae6444a7b89c7660df7c4cac97fb89df8a998362c21a097b2908f7" exitCode=0 Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.219296 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877878478-zq76n" event={"ID":"f883fbd2-c0ad-4d3e-a56d-c99c361b6439","Type":"ContainerDied","Data":"77e9e062b9ae6444a7b89c7660df7c4cac97fb89df8a998362c21a097b2908f7"} Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.276526 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6ddc7fc844-5qd9h" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.276763 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.277484 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ca4abb4a90b26185324b9145545abeafcf27374b78455ab6064a09cf34a460ca"} pod="openstack/horizon-6ddc7fc844-5qd9h" containerMessage="Container horizon failed startup probe, will be 
restarted" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.277515 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6ddc7fc844-5qd9h" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" containerID="cri-o://ca4abb4a90b26185324b9145545abeafcf27374b78455ab6064a09cf34a460ca" gracePeriod=30 Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.281125 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.353986 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.515160 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-config\") pod \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.515302 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-ovndb-tls-certs\") pod \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.515385 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qln9\" (UniqueName: \"kubernetes.io/projected/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-kube-api-access-5qln9\") pod \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.515488 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-httpd-config\") pod \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.515617 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-combined-ca-bundle\") pod \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\" (UID: \"f883fbd2-c0ad-4d3e-a56d-c99c361b6439\") " Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.520194 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f883fbd2-c0ad-4d3e-a56d-c99c361b6439" (UID: "f883fbd2-c0ad-4d3e-a56d-c99c361b6439"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.521761 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-kube-api-access-5qln9" (OuterVolumeSpecName: "kube-api-access-5qln9") pod "f883fbd2-c0ad-4d3e-a56d-c99c361b6439" (UID: "f883fbd2-c0ad-4d3e-a56d-c99c361b6439"). InnerVolumeSpecName "kube-api-access-5qln9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.571016 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-config" (OuterVolumeSpecName: "config") pod "f883fbd2-c0ad-4d3e-a56d-c99c361b6439" (UID: "f883fbd2-c0ad-4d3e-a56d-c99c361b6439"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.578893 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f883fbd2-c0ad-4d3e-a56d-c99c361b6439" (UID: "f883fbd2-c0ad-4d3e-a56d-c99c361b6439"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.594084 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f883fbd2-c0ad-4d3e-a56d-c99c361b6439" (UID: "f883fbd2-c0ad-4d3e-a56d-c99c361b6439"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.619718 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.620006 4880 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.620018 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qln9\" (UniqueName: \"kubernetes.io/projected/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-kube-api-access-5qln9\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.620027 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-httpd-config\") on node \"crc\" DevicePath \"\"" 
Dec 01 03:14:55 crc kubenswrapper[4880]: I1201 03:14:55.620035 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f883fbd2-c0ad-4d3e-a56d-c99c361b6439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.232620 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7877878478-zq76n" Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.232611 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7877878478-zq76n" event={"ID":"f883fbd2-c0ad-4d3e-a56d-c99c361b6439","Type":"ContainerDied","Data":"e0933fa61e5438d02c8cc46902fc4ab36b34544e4a9e5027ed3ff15e64401671"} Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.232760 4880 scope.go:117] "RemoveContainer" containerID="380169dcb254095b355faf5dd1dfa0d266524fd767be92ab5d1b06667b934dfc" Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.236151 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerStarted","Data":"a9d7e883ebd01133d3eb1c141114d86ecdd1ba6fce788be2ba85f5873c0ce8aa"} Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.236191 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerStarted","Data":"189cc505df6e96013da01bc755f07d64f4c74133b749d59a5cc99cf77a90114c"} Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.236202 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerStarted","Data":"71545db480525d418feaa2e57e424ad16733ce6ebd25fe7539c0c6168e7ad508"} Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.277249 4880 scope.go:117] "RemoveContainer" 
containerID="77e9e062b9ae6444a7b89c7660df7c4cac97fb89df8a998362c21a097b2908f7" Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.305167 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7877878478-zq76n"] Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.311712 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7877878478-zq76n"] Dec 01 03:14:56 crc kubenswrapper[4880]: I1201 03:14:56.803743 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" path="/var/lib/kubelet/pods/f883fbd2-c0ad-4d3e-a56d-c99c361b6439/volumes" Dec 01 03:14:57 crc kubenswrapper[4880]: I1201 03:14:57.253053 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerStarted","Data":"427c50ec3e3d1bbecb6fd33a4262e4ddb3f2d8ee7662921c2d8b684f98bfa43c"} Dec 01 03:14:57 crc kubenswrapper[4880]: I1201 03:14:57.587386 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:14:57 crc kubenswrapper[4880]: I1201 03:14:57.897802 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:51434->10.217.0.161:9311: read: connection reset by peer" Dec 01 03:14:57 crc kubenswrapper[4880]: I1201 03:14:57.897804 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5856678bc8-5x7bn" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:51436->10.217.0.161:9311: read: connection reset by peer" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.267334 4880 
generic.go:334] "Generic (PLEG): container finished" podID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerID="21b2513810dd23a24b382d31036fdfbcee15b516f1d10507dbb2103e03583774" exitCode=0 Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.267392 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5856678bc8-5x7bn" event={"ID":"889ff544-5e69-4c30-8304-a429e4af1b0f","Type":"ContainerDied","Data":"21b2513810dd23a24b382d31036fdfbcee15b516f1d10507dbb2103e03583774"} Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.267418 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5856678bc8-5x7bn" event={"ID":"889ff544-5e69-4c30-8304-a429e4af1b0f","Type":"ContainerDied","Data":"da4ef8199b94fa66d76fe8f22667a8cdfb5ec8779dec48285764a2a50113d61f"} Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.267430 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4ef8199b94fa66d76fe8f22667a8cdfb5ec8779dec48285764a2a50113d61f" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.270286 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerStarted","Data":"aef8c5cf6096aff041c580bb71c77a555c9de81c6d2fdab2ba10ec78bfa44b2d"} Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.272467 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.295184 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.14203619 podStartE2EDuration="4.295166879s" podCreationTimestamp="2025-12-01 03:14:54 +0000 UTC" firstStartedPulling="2025-12-01 03:14:55.278404019 +0000 UTC m=+1124.789658391" lastFinishedPulling="2025-12-01 03:14:57.431534708 +0000 UTC m=+1126.942789080" observedRunningTime="2025-12-01 03:14:58.291572948 +0000 UTC 
m=+1127.802827330" watchObservedRunningTime="2025-12-01 03:14:58.295166879 +0000 UTC m=+1127.806421251" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.316931 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.481708 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-combined-ca-bundle\") pod \"889ff544-5e69-4c30-8304-a429e4af1b0f\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.482304 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/889ff544-5e69-4c30-8304-a429e4af1b0f-logs\") pod \"889ff544-5e69-4c30-8304-a429e4af1b0f\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.482339 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6dlp\" (UniqueName: \"kubernetes.io/projected/889ff544-5e69-4c30-8304-a429e4af1b0f-kube-api-access-l6dlp\") pod \"889ff544-5e69-4c30-8304-a429e4af1b0f\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.482399 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data-custom\") pod \"889ff544-5e69-4c30-8304-a429e4af1b0f\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.482575 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data\") pod 
\"889ff544-5e69-4c30-8304-a429e4af1b0f\" (UID: \"889ff544-5e69-4c30-8304-a429e4af1b0f\") " Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.483023 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/889ff544-5e69-4c30-8304-a429e4af1b0f-logs" (OuterVolumeSpecName: "logs") pod "889ff544-5e69-4c30-8304-a429e4af1b0f" (UID: "889ff544-5e69-4c30-8304-a429e4af1b0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.488172 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "889ff544-5e69-4c30-8304-a429e4af1b0f" (UID: "889ff544-5e69-4c30-8304-a429e4af1b0f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.488428 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889ff544-5e69-4c30-8304-a429e4af1b0f-kube-api-access-l6dlp" (OuterVolumeSpecName: "kube-api-access-l6dlp") pod "889ff544-5e69-4c30-8304-a429e4af1b0f" (UID: "889ff544-5e69-4c30-8304-a429e4af1b0f"). InnerVolumeSpecName "kube-api-access-l6dlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.536712 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "889ff544-5e69-4c30-8304-a429e4af1b0f" (UID: "889ff544-5e69-4c30-8304-a429e4af1b0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.538121 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data" (OuterVolumeSpecName: "config-data") pod "889ff544-5e69-4c30-8304-a429e4af1b0f" (UID: "889ff544-5e69-4c30-8304-a429e4af1b0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.583057 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.584105 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.584129 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/889ff544-5e69-4c30-8304-a429e4af1b0f-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.584138 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6dlp\" (UniqueName: \"kubernetes.io/projected/889ff544-5e69-4c30-8304-a429e4af1b0f-kube-api-access-l6dlp\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.584151 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.584161 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/889ff544-5e69-4c30-8304-a429e4af1b0f-config-data\") on node \"crc\" DevicePath \"\"" 
Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.672435 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579bf799d7-jfhbz"] Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.672986 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="dnsmasq-dns" containerID="cri-o://8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c" gracePeriod=10 Dec 01 03:14:58 crc kubenswrapper[4880]: I1201 03:14:58.792605 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.172457 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.222202 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.247051 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283225 4880 generic.go:334] "Generic (PLEG): container finished" podID="47e0e910-3465-4faf-96e3-68c70d730b79" containerID="8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c" exitCode=0 Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283329 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5856678bc8-5x7bn" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283334 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" event={"ID":"47e0e910-3465-4faf-96e3-68c70d730b79","Type":"ContainerDied","Data":"8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c"} Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283380 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283388 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-579bf799d7-jfhbz" event={"ID":"47e0e910-3465-4faf-96e3-68c70d730b79","Type":"ContainerDied","Data":"e0a14e8e5e9a02eebfa5d7870c4ec799c4c6a229f4b36caadda66e202fdbb481"} Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283411 4880 scope.go:117] "RemoveContainer" containerID="8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283419 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="cinder-scheduler" containerID="cri-o://b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c" gracePeriod=30 Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.283515 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="probe" containerID="cri-o://a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925" gracePeriod=30 Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.321778 4880 scope.go:117] "RemoveContainer" containerID="61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 
03:14:59.336227 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5856678bc8-5x7bn"] Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.342227 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5856678bc8-5x7bn"] Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.355089 4880 scope.go:117] "RemoveContainer" containerID="8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c" Dec 01 03:14:59 crc kubenswrapper[4880]: E1201 03:14:59.356307 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c\": container with ID starting with 8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c not found: ID does not exist" containerID="8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.356358 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c"} err="failed to get container status \"8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c\": rpc error: code = NotFound desc = could not find container \"8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c\": container with ID starting with 8bcbed1c9b5f99d91875bc89513b44430aa4fff5a3520fa3826710ef50a8036c not found: ID does not exist" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.356380 4880 scope.go:117] "RemoveContainer" containerID="61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008" Dec 01 03:14:59 crc kubenswrapper[4880]: E1201 03:14:59.356717 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008\": container with ID starting with 
61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008 not found: ID does not exist" containerID="61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.356740 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008"} err="failed to get container status \"61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008\": rpc error: code = NotFound desc = could not find container \"61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008\": container with ID starting with 61346097a0ef1e475a032bb474a65a3f0db7a8642e276eec65763f8925942008 not found: ID does not exist" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.402581 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-sb\") pod \"47e0e910-3465-4faf-96e3-68c70d730b79\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.403381 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-svc\") pod \"47e0e910-3465-4faf-96e3-68c70d730b79\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.404467 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-nb\") pod \"47e0e910-3465-4faf-96e3-68c70d730b79\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.404549 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-config\") pod \"47e0e910-3465-4faf-96e3-68c70d730b79\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.404593 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km458\" (UniqueName: \"kubernetes.io/projected/47e0e910-3465-4faf-96e3-68c70d730b79-kube-api-access-km458\") pod \"47e0e910-3465-4faf-96e3-68c70d730b79\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.404727 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-swift-storage-0\") pod \"47e0e910-3465-4faf-96e3-68c70d730b79\" (UID: \"47e0e910-3465-4faf-96e3-68c70d730b79\") " Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.416323 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e0e910-3465-4faf-96e3-68c70d730b79-kube-api-access-km458" (OuterVolumeSpecName: "kube-api-access-km458") pod "47e0e910-3465-4faf-96e3-68c70d730b79" (UID: "47e0e910-3465-4faf-96e3-68c70d730b79"). InnerVolumeSpecName "kube-api-access-km458". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.449570 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-config" (OuterVolumeSpecName: "config") pod "47e0e910-3465-4faf-96e3-68c70d730b79" (UID: "47e0e910-3465-4faf-96e3-68c70d730b79"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.455478 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47e0e910-3465-4faf-96e3-68c70d730b79" (UID: "47e0e910-3465-4faf-96e3-68c70d730b79"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.456224 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47e0e910-3465-4faf-96e3-68c70d730b79" (UID: "47e0e910-3465-4faf-96e3-68c70d730b79"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.461158 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47e0e910-3465-4faf-96e3-68c70d730b79" (UID: "47e0e910-3465-4faf-96e3-68c70d730b79"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.473237 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47e0e910-3465-4faf-96e3-68c70d730b79" (UID: "47e0e910-3465-4faf-96e3-68c70d730b79"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.509241 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.509768 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.509843 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.509954 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.510040 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e0e910-3465-4faf-96e3-68c70d730b79-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.510138 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km458\" (UniqueName: \"kubernetes.io/projected/47e0e910-3465-4faf-96e3-68c70d730b79-kube-api-access-km458\") on node \"crc\" DevicePath \"\"" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.646925 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-56cc96959b-rrjz7" Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.676448 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-579bf799d7-jfhbz"] 
Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.709452 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-579bf799d7-jfhbz"] Dec 01 03:14:59 crc kubenswrapper[4880]: I1201 03:14:59.749034 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6ddc7fc844-5qd9h"] Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.168803 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"] Dec 01 03:15:00 crc kubenswrapper[4880]: E1201 03:15:00.169219 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169236 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api" Dec 01 03:15:00 crc kubenswrapper[4880]: E1201 03:15:00.169255 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="dnsmasq-dns" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169262 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="dnsmasq-dns" Dec 01 03:15:00 crc kubenswrapper[4880]: E1201 03:15:00.169283 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169289 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" Dec 01 03:15:00 crc kubenswrapper[4880]: E1201 03:15:00.169297 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-httpd" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169303 4880 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-httpd" Dec 01 03:15:00 crc kubenswrapper[4880]: E1201 03:15:00.169311 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="init" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169318 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="init" Dec 01 03:15:00 crc kubenswrapper[4880]: E1201 03:15:00.169329 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-api" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169335 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-api" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169481 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169496 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" containerName="dnsmasq-dns" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169507 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-httpd" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169517 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f883fbd2-c0ad-4d3e-a56d-c99c361b6439" containerName="neutron-api" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.169531 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" containerName="barbican-api-log" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.170111 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.173211 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.184986 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.192621 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"] Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.291937 4880 generic.go:334] "Generic (PLEG): container finished" podID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerID="a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925" exitCode=0 Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.292002 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85741d09-57c9-4c00-8a01-70258aae8f7b","Type":"ContainerDied","Data":"a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925"} Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.328404 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f32e230-504f-40c2-8d2d-3add5e3a46d8-secret-volume\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.328631 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2mw\" (UniqueName: \"kubernetes.io/projected/2f32e230-504f-40c2-8d2d-3add5e3a46d8-kube-api-access-ld2mw\") pod 
\"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.328712 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f32e230-504f-40c2-8d2d-3add5e3a46d8-config-volume\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.430473 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f32e230-504f-40c2-8d2d-3add5e3a46d8-secret-volume\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.430627 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2mw\" (UniqueName: \"kubernetes.io/projected/2f32e230-504f-40c2-8d2d-3add5e3a46d8-kube-api-access-ld2mw\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.430665 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f32e230-504f-40c2-8d2d-3add5e3a46d8-config-volume\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.432714 4880 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f32e230-504f-40c2-8d2d-3add5e3a46d8-config-volume\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.437589 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f32e230-504f-40c2-8d2d-3add5e3a46d8-secret-volume\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.452784 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2mw\" (UniqueName: \"kubernetes.io/projected/2f32e230-504f-40c2-8d2d-3add5e3a46d8-kube-api-access-ld2mw\") pod \"collect-profiles-29409315-74cz7\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.487487 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"
Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.818066 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e0e910-3465-4faf-96e3-68c70d730b79" path="/var/lib/kubelet/pods/47e0e910-3465-4faf-96e3-68c70d730b79/volumes"
Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.818851 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889ff544-5e69-4c30-8304-a429e4af1b0f" path="/var/lib/kubelet/pods/889ff544-5e69-4c30-8304-a429e4af1b0f/volumes"
Dec 01 03:15:00 crc kubenswrapper[4880]: I1201 03:15:00.937483 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"]
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.125889 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.247324 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data\") pod \"85741d09-57c9-4c00-8a01-70258aae8f7b\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") "
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.247702 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-scripts\") pod \"85741d09-57c9-4c00-8a01-70258aae8f7b\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") "
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.247754 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85741d09-57c9-4c00-8a01-70258aae8f7b-etc-machine-id\") pod \"85741d09-57c9-4c00-8a01-70258aae8f7b\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") "
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.247891 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-combined-ca-bundle\") pod \"85741d09-57c9-4c00-8a01-70258aae8f7b\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") "
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.247947 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85741d09-57c9-4c00-8a01-70258aae8f7b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "85741d09-57c9-4c00-8a01-70258aae8f7b" (UID: "85741d09-57c9-4c00-8a01-70258aae8f7b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.247981 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data-custom\") pod \"85741d09-57c9-4c00-8a01-70258aae8f7b\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") "
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.248066 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggj2\" (UniqueName: \"kubernetes.io/projected/85741d09-57c9-4c00-8a01-70258aae8f7b-kube-api-access-pggj2\") pod \"85741d09-57c9-4c00-8a01-70258aae8f7b\" (UID: \"85741d09-57c9-4c00-8a01-70258aae8f7b\") "
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.248716 4880 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/85741d09-57c9-4c00-8a01-70258aae8f7b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.256298 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-scripts" (OuterVolumeSpecName: "scripts") pod "85741d09-57c9-4c00-8a01-70258aae8f7b" (UID: "85741d09-57c9-4c00-8a01-70258aae8f7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.256426 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85741d09-57c9-4c00-8a01-70258aae8f7b-kube-api-access-pggj2" (OuterVolumeSpecName: "kube-api-access-pggj2") pod "85741d09-57c9-4c00-8a01-70258aae8f7b" (UID: "85741d09-57c9-4c00-8a01-70258aae8f7b"). InnerVolumeSpecName "kube-api-access-pggj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.262987 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85741d09-57c9-4c00-8a01-70258aae8f7b" (UID: "85741d09-57c9-4c00-8a01-70258aae8f7b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.320647 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85741d09-57c9-4c00-8a01-70258aae8f7b" (UID: "85741d09-57c9-4c00-8a01-70258aae8f7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.328116 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" event={"ID":"2f32e230-504f-40c2-8d2d-3add5e3a46d8","Type":"ContainerStarted","Data":"6794b302f110ab047fee2417ed9708a89bd11e2eb2757428a0aeb11078e886a3"}
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.328316 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" event={"ID":"2f32e230-504f-40c2-8d2d-3add5e3a46d8","Type":"ContainerStarted","Data":"68a025bf4a3584899f5c4ba4be2d9ef82fe16eeb4596666d3de92bf90fda7f20"}
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.340566 4880 generic.go:334] "Generic (PLEG): container finished" podID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerID="b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c" exitCode=0
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.340604 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.340613 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85741d09-57c9-4c00-8a01-70258aae8f7b","Type":"ContainerDied","Data":"b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c"}
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.340642 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"85741d09-57c9-4c00-8a01-70258aae8f7b","Type":"ContainerDied","Data":"60b0909cccc34a3ef9cf69a96358501062bf75e93b326dc6e99c4676d5089811"}
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.340657 4880 scope.go:117] "RemoveContainer" containerID="a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.354776 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.354828 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.354840 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.354851 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggj2\" (UniqueName: \"kubernetes.io/projected/85741d09-57c9-4c00-8a01-70258aae8f7b-kube-api-access-pggj2\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.357833 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" podStartSLOduration=1.357818258 podStartE2EDuration="1.357818258s" podCreationTimestamp="2025-12-01 03:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:01.353438308 +0000 UTC m=+1130.864692680" watchObservedRunningTime="2025-12-01 03:15:01.357818258 +0000 UTC m=+1130.869072630"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.368828 4880 scope.go:117] "RemoveContainer" containerID="b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.398129 4880 scope.go:117] "RemoveContainer" containerID="a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925"
Dec 01 03:15:01 crc kubenswrapper[4880]: E1201 03:15:01.398682 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925\": container with ID starting with a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925 not found: ID does not exist" containerID="a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.398725 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925"} err="failed to get container status \"a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925\": rpc error: code = NotFound desc = could not find container \"a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925\": container with ID starting with a54922becd0bdee67d845e0912c189c74824e4b7afacc2c90881cf4b0fa97925 not found: ID does not exist"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.398754 4880 scope.go:117] "RemoveContainer" containerID="b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c"
Dec 01 03:15:01 crc kubenswrapper[4880]: E1201 03:15:01.399218 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c\": container with ID starting with b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c not found: ID does not exist" containerID="b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.399255 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c"} err="failed to get container status \"b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c\": rpc error: code = NotFound desc = could not find container \"b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c\": container with ID starting with b27d33705203138929fed12b8065e0772863715e834ba727f4a4946cb0994c9c not found: ID does not exist"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.400719 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data" (OuterVolumeSpecName: "config-data") pod "85741d09-57c9-4c00-8a01-70258aae8f7b" (UID: "85741d09-57c9-4c00-8a01-70258aae8f7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.455917 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85741d09-57c9-4c00-8a01-70258aae8f7b-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.673066 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.685132 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.699700 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 03:15:01 crc kubenswrapper[4880]: E1201 03:15:01.700048 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="cinder-scheduler"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.700064 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="cinder-scheduler"
Dec 01 03:15:01 crc kubenswrapper[4880]: E1201 03:15:01.700094 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="probe"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.700100 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="probe"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.700254 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="cinder-scheduler"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.700278 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" containerName="probe"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.701109 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.704017 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.723531 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.861125 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.861464 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.861502 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75p98\" (UniqueName: \"kubernetes.io/projected/0f82fb04-500d-452a-9fdc-a5d8466952a0-kube-api-access-75p98\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.861540 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.861588 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f82fb04-500d-452a-9fdc-a5d8466952a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.861606 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.882318 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.963643 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75p98\" (UniqueName: \"kubernetes.io/projected/0f82fb04-500d-452a-9fdc-a5d8466952a0-kube-api-access-75p98\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.963717 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.963787 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f82fb04-500d-452a-9fdc-a5d8466952a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.963813 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.963863 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.963952 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.965156 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f82fb04-500d-452a-9fdc-a5d8466952a0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.972207 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.984089 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.984550 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.989394 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75p98\" (UniqueName: \"kubernetes.io/projected/0f82fb04-500d-452a-9fdc-a5d8466952a0-kube-api-access-75p98\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:01 crc kubenswrapper[4880]: I1201 03:15:01.997849 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f82fb04-500d-452a-9fdc-a5d8466952a0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f82fb04-500d-452a-9fdc-a5d8466952a0\") " pod="openstack/cinder-scheduler-0"
Dec 01 03:15:02 crc kubenswrapper[4880]: I1201 03:15:02.027987 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 01 03:15:02 crc kubenswrapper[4880]: I1201 03:15:02.358579 4880 generic.go:334] "Generic (PLEG): container finished" podID="2f32e230-504f-40c2-8d2d-3add5e3a46d8" containerID="6794b302f110ab047fee2417ed9708a89bd11e2eb2757428a0aeb11078e886a3" exitCode=0
Dec 01 03:15:02 crc kubenswrapper[4880]: I1201 03:15:02.358656 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" event={"ID":"2f32e230-504f-40c2-8d2d-3add5e3a46d8","Type":"ContainerDied","Data":"6794b302f110ab047fee2417ed9708a89bd11e2eb2757428a0aeb11078e886a3"}
Dec 01 03:15:02 crc kubenswrapper[4880]: I1201 03:15:02.480008 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 03:15:02 crc kubenswrapper[4880]: I1201 03:15:02.799265 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85741d09-57c9-4c00-8a01-70258aae8f7b" path="/var/lib/kubelet/pods/85741d09-57c9-4c00-8a01-70258aae8f7b/volumes"
Dec 01 03:15:03 crc kubenswrapper[4880]: I1201 03:15:03.378713 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f82fb04-500d-452a-9fdc-a5d8466952a0","Type":"ContainerStarted","Data":"9be42ef43bff384da10f784bbca8bb6814e1403413cf834d93819bc688935b64"}
Dec 01 03:15:03 crc kubenswrapper[4880]: I1201 03:15:03.378972 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f82fb04-500d-452a-9fdc-a5d8466952a0","Type":"ContainerStarted","Data":"c01972b0a3fc74063a592332f9a4113c2ea361ebcbb06a1b22434808f4874111"}
Dec 01 03:15:03 crc kubenswrapper[4880]: I1201 03:15:03.812256 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.012392 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld2mw\" (UniqueName: \"kubernetes.io/projected/2f32e230-504f-40c2-8d2d-3add5e3a46d8-kube-api-access-ld2mw\") pod \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") "
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.012494 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f32e230-504f-40c2-8d2d-3add5e3a46d8-secret-volume\") pod \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") "
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.012637 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f32e230-504f-40c2-8d2d-3add5e3a46d8-config-volume\") pod \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\" (UID: \"2f32e230-504f-40c2-8d2d-3add5e3a46d8\") "
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.013250 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f32e230-504f-40c2-8d2d-3add5e3a46d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f32e230-504f-40c2-8d2d-3add5e3a46d8" (UID: "2f32e230-504f-40c2-8d2d-3add5e3a46d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.018779 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f32e230-504f-40c2-8d2d-3add5e3a46d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f32e230-504f-40c2-8d2d-3add5e3a46d8" (UID: "2f32e230-504f-40c2-8d2d-3add5e3a46d8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.031097 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f32e230-504f-40c2-8d2d-3add5e3a46d8-kube-api-access-ld2mw" (OuterVolumeSpecName: "kube-api-access-ld2mw") pod "2f32e230-504f-40c2-8d2d-3add5e3a46d8" (UID: "2f32e230-504f-40c2-8d2d-3add5e3a46d8"). InnerVolumeSpecName "kube-api-access-ld2mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.114032 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld2mw\" (UniqueName: \"kubernetes.io/projected/2f32e230-504f-40c2-8d2d-3add5e3a46d8-kube-api-access-ld2mw\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.114071 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f32e230-504f-40c2-8d2d-3add5e3a46d8-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.114081 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f32e230-504f-40c2-8d2d-3add5e3a46d8-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.393286 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f82fb04-500d-452a-9fdc-a5d8466952a0","Type":"ContainerStarted","Data":"a9f9f47712f590ba435613965622c4f1142af56cd00ab9b6185ffcf3cd0898db"}
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.395921 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7" event={"ID":"2f32e230-504f-40c2-8d2d-3add5e3a46d8","Type":"ContainerDied","Data":"68a025bf4a3584899f5c4ba4be2d9ef82fe16eeb4596666d3de92bf90fda7f20"}
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.395946 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a025bf4a3584899f5c4ba4be2d9ef82fe16eeb4596666d3de92bf90fda7f20"
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.395983 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.418364 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.418344624 podStartE2EDuration="3.418344624s" podCreationTimestamp="2025-12-01 03:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:04.417604416 +0000 UTC m=+1133.928858788" watchObservedRunningTime="2025-12-01 03:15:04.418344624 +0000 UTC m=+1133.929598996"
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.735426 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d55d9c58d-c2xlp"
Dec 01 03:15:04 crc kubenswrapper[4880]: I1201 03:15:04.757035 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d55d9c58d-c2xlp"
Dec 01 03:15:05 crc kubenswrapper[4880]: I1201 03:15:05.330371 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5c48bf866c-6nsdn"
Dec 01 03:15:07 crc kubenswrapper[4880]: I1201 03:15:07.028893 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.096727 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Dec 01 03:15:09 crc kubenswrapper[4880]: E1201 03:15:09.097345 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f32e230-504f-40c2-8d2d-3add5e3a46d8" containerName="collect-profiles"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.097357 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f32e230-504f-40c2-8d2d-3add5e3a46d8" containerName="collect-profiles"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.097537 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f32e230-504f-40c2-8d2d-3add5e3a46d8" containerName="collect-profiles"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.098122 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.100417 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.100504 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-krqbh"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.100566 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.111168 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.201846 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c8ef62e-f8bf-4982-9c80-9a52bb538621-openstack-config-secret\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.201906 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkzql\" (UniqueName: \"kubernetes.io/projected/5c8ef62e-f8bf-4982-9c80-9a52bb538621-kube-api-access-hkzql\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.201939 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8ef62e-f8bf-4982-9c80-9a52bb538621-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.201980 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c8ef62e-f8bf-4982-9c80-9a52bb538621-openstack-config\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.306826 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c8ef62e-f8bf-4982-9c80-9a52bb538621-openstack-config-secret\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.307099 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkzql\" (UniqueName: \"kubernetes.io/projected/5c8ef62e-f8bf-4982-9c80-9a52bb538621-kube-api-access-hkzql\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.307188 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8ef62e-f8bf-4982-9c80-9a52bb538621-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.307283 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c8ef62e-f8bf-4982-9c80-9a52bb538621-openstack-config\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.308183 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5c8ef62e-f8bf-4982-9c80-9a52bb538621-openstack-config\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.334672 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkzql\" (UniqueName: \"kubernetes.io/projected/5c8ef62e-f8bf-4982-9c80-9a52bb538621-kube-api-access-hkzql\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.339671 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5c8ef62e-f8bf-4982-9c80-9a52bb538621-openstack-config-secret\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.343501 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c8ef62e-f8bf-4982-9c80-9a52bb538621-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5c8ef62e-f8bf-4982-9c80-9a52bb538621\") " pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.417299 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 01 03:15:09 crc kubenswrapper[4880]: I1201 03:15:09.928157 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.434575 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-657545ccb7-km728"]
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.439721 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.446811 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.447098 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.447235 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-jclgj"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.486647 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5c8ef62e-f8bf-4982-9c80-9a52bb538621","Type":"ContainerStarted","Data":"23bf5ca206b16f6b479149821e5c5cb6a4a5d6e8013ae7bfd3c700458c8d4ea7"}
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.496391 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-657545ccb7-km728"]
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.530805 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-combined-ca-bundle\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.531060 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvf6m\" (UniqueName: \"kubernetes.io/projected/f3ed19fa-d784-48ed-8770-35c150a1a24e-kube-api-access-cvf6m\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.531374 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.531463 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data-custom\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.582944 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-558d57d895-k4fk5"]
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.584736 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-558d57d895-k4fk5"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.591769 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.612299 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-558d57d895-k4fk5"]
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.620431 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86459544c9-nrq5w"]
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.621968 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86459544c9-nrq5w"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.639477 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.639526 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data-custom\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.639583 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-combined-ca-bundle\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728"
Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.639603 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvf6m\" (UniqueName: \"kubernetes.io/projected/f3ed19fa-d784-48ed-8770-35c150a1a24e-kube-api-access-cvf6m\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.656543 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86459544c9-nrq5w"] Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.663232 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.667371 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data-custom\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.687781 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvf6m\" (UniqueName: \"kubernetes.io/projected/f3ed19fa-d784-48ed-8770-35c150a1a24e-kube-api-access-cvf6m\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.689736 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-combined-ca-bundle\") pod \"heat-engine-657545ccb7-km728\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " 
pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740688 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-sb\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740741 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data-custom\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740763 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddt9m\" (UniqueName: \"kubernetes.io/projected/ba0767c5-9152-431e-b924-05ccd6875e08-kube-api-access-ddt9m\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740790 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-nb\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740810 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-combined-ca-bundle\") pod 
\"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740858 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9bt5\" (UniqueName: \"kubernetes.io/projected/cb1b1ac3-2d06-47e5-be03-dca35c8605be-kube-api-access-g9bt5\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740901 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-swift-storage-0\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740917 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-config\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.740949 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-svc\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.741018 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.763527 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-67cbdfff6f-zbn4x"] Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.764546 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.774277 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.776014 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67cbdfff6f-zbn4x"] Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.798254 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-jclgj" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.798487 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851740 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9bt5\" (UniqueName: \"kubernetes.io/projected/cb1b1ac3-2d06-47e5-be03-dca35c8605be-kube-api-access-g9bt5\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851787 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-swift-storage-0\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851807 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-config\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851836 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-svc\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851908 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-combined-ca-bundle\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " 
pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851957 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjkn\" (UniqueName: \"kubernetes.io/projected/6d2263fe-480f-439e-8367-06dd063f952e-kube-api-access-mgjkn\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.851989 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data-custom\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852014 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852038 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-sb\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852071 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data-custom\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " 
pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852086 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddt9m\" (UniqueName: \"kubernetes.io/projected/ba0767c5-9152-431e-b924-05ccd6875e08-kube-api-access-ddt9m\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852104 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852125 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-nb\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.852146 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-combined-ca-bundle\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.860266 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-swift-storage-0\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " 
pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.860778 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-config\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.861294 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-svc\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.862686 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-sb\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.863553 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-nb\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.867695 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-combined-ca-bundle\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.874537 4880 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.876256 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.881894 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddt9m\" (UniqueName: \"kubernetes.io/projected/ba0767c5-9152-431e-b924-05ccd6875e08-kube-api-access-ddt9m\") pod \"dnsmasq-dns-86459544c9-nrq5w\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") " pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.891309 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data-custom\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.899803 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9bt5\" (UniqueName: \"kubernetes.io/projected/cb1b1ac3-2d06-47e5-be03-dca35c8605be-kube-api-access-g9bt5\") pod \"heat-cfnapi-558d57d895-k4fk5\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.927438 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.953410 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjkn\" (UniqueName: \"kubernetes.io/projected/6d2263fe-480f-439e-8367-06dd063f952e-kube-api-access-mgjkn\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.953464 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data-custom\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.953522 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.953618 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-combined-ca-bundle\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.958449 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 
crc kubenswrapper[4880]: I1201 03:15:10.960628 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data-custom\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.961184 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-combined-ca-bundle\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:10 crc kubenswrapper[4880]: I1201 03:15:10.991735 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjkn\" (UniqueName: \"kubernetes.io/projected/6d2263fe-480f-439e-8367-06dd063f952e-kube-api-access-mgjkn\") pod \"heat-api-67cbdfff6f-zbn4x\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.042257 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.093996 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.378688 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-657545ccb7-km728"] Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.510726 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-657545ccb7-km728" event={"ID":"f3ed19fa-d784-48ed-8770-35c150a1a24e","Type":"ContainerStarted","Data":"fb9a35fde5b66eb3e5475553005e1360f925f05882505e14ac64144e32439f61"} Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.528889 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-558d57d895-k4fk5"] Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.659766 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67cbdfff6f-zbn4x"] Dec 01 03:15:11 crc kubenswrapper[4880]: I1201 03:15:11.678927 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86459544c9-nrq5w"] Dec 01 03:15:11 crc kubenswrapper[4880]: W1201 03:15:11.761475 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d2263fe_480f_439e_8367_06dd063f952e.slice/crio-f7fc1539e6555183df13697f1f2f6cbf0a31f2c71571ffc7c91e5acf665cd2a4 WatchSource:0}: Error finding container f7fc1539e6555183df13697f1f2f6cbf0a31f2c71571ffc7c91e5acf665cd2a4: Status 404 returned error can't find the container with id f7fc1539e6555183df13697f1f2f6cbf0a31f2c71571ffc7c91e5acf665cd2a4 Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.366119 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.522094 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-558d57d895-k4fk5" 
event={"ID":"cb1b1ac3-2d06-47e5-be03-dca35c8605be","Type":"ContainerStarted","Data":"99a2b2ccd94094c69ea8007a71b96c917487d55a87d44fab7e1584db2c064149"} Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.524385 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67cbdfff6f-zbn4x" event={"ID":"6d2263fe-480f-439e-8367-06dd063f952e","Type":"ContainerStarted","Data":"f7fc1539e6555183df13697f1f2f6cbf0a31f2c71571ffc7c91e5acf665cd2a4"} Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.526264 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-657545ccb7-km728" event={"ID":"f3ed19fa-d784-48ed-8770-35c150a1a24e","Type":"ContainerStarted","Data":"a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359"} Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.526989 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.528936 4880 generic.go:334] "Generic (PLEG): container finished" podID="ba0767c5-9152-431e-b924-05ccd6875e08" containerID="b06c5124332cd92c00aeb5ee5b76fc68ab61976bb781f8c32502e65feaf8c508" exitCode=0 Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.528973 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" event={"ID":"ba0767c5-9152-431e-b924-05ccd6875e08","Type":"ContainerDied","Data":"b06c5124332cd92c00aeb5ee5b76fc68ab61976bb781f8c32502e65feaf8c508"} Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.528994 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" event={"ID":"ba0767c5-9152-431e-b924-05ccd6875e08","Type":"ContainerStarted","Data":"8ad2dfcace0580eb227ac718f406d7d31a96c1c3af447e8299c3545cf9e592c8"} Dec 01 03:15:12 crc kubenswrapper[4880]: I1201 03:15:12.667128 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-engine-657545ccb7-km728" podStartSLOduration=2.667106074 podStartE2EDuration="2.667106074s" podCreationTimestamp="2025-12-01 03:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:12.568564835 +0000 UTC m=+1142.079819207" watchObservedRunningTime="2025-12-01 03:15:12.667106074 +0000 UTC m=+1142.178360446" Dec 01 03:15:13 crc kubenswrapper[4880]: I1201 03:15:13.542120 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" event={"ID":"ba0767c5-9152-431e-b924-05ccd6875e08","Type":"ContainerStarted","Data":"e4ad5d2bbbea602d8a47588e6e54db41d5c9cd8ed843886bb586f20742c77c96"} Dec 01 03:15:13 crc kubenswrapper[4880]: I1201 03:15:13.542604 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:13 crc kubenswrapper[4880]: I1201 03:15:13.568285 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" podStartSLOduration=3.568265323 podStartE2EDuration="3.568265323s" podCreationTimestamp="2025-12-01 03:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:13.560562798 +0000 UTC m=+1143.071817180" watchObservedRunningTime="2025-12-01 03:15:13.568265323 +0000 UTC m=+1143.079519695" Dec 01 03:15:16 crc kubenswrapper[4880]: I1201 03:15:16.574860 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-558d57d895-k4fk5" event={"ID":"cb1b1ac3-2d06-47e5-be03-dca35c8605be","Type":"ContainerStarted","Data":"051043c9ec951382d04197f31f174423dce8aa44329af96960966d39e488e0d5"} Dec 01 03:15:16 crc kubenswrapper[4880]: I1201 03:15:16.575295 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 
03:15:16 crc kubenswrapper[4880]: I1201 03:15:16.578532 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67cbdfff6f-zbn4x" event={"ID":"6d2263fe-480f-439e-8367-06dd063f952e","Type":"ContainerStarted","Data":"a4e792d60a1d7e5e89d0d3dae093cee383fa962a21058cb05d426402d90cd639"} Dec 01 03:15:16 crc kubenswrapper[4880]: I1201 03:15:16.578748 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:16 crc kubenswrapper[4880]: I1201 03:15:16.599336 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-558d57d895-k4fk5" podStartSLOduration=2.826987384 podStartE2EDuration="6.599314934s" podCreationTimestamp="2025-12-01 03:15:10 +0000 UTC" firstStartedPulling="2025-12-01 03:15:11.557553641 +0000 UTC m=+1141.068808013" lastFinishedPulling="2025-12-01 03:15:15.329881191 +0000 UTC m=+1144.841135563" observedRunningTime="2025-12-01 03:15:16.587343782 +0000 UTC m=+1146.098598154" watchObservedRunningTime="2025-12-01 03:15:16.599314934 +0000 UTC m=+1146.110569306" Dec 01 03:15:16 crc kubenswrapper[4880]: I1201 03:15:16.615416 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-67cbdfff6f-zbn4x" podStartSLOduration=3.049935647 podStartE2EDuration="6.615399781s" podCreationTimestamp="2025-12-01 03:15:10 +0000 UTC" firstStartedPulling="2025-12-01 03:15:11.767943076 +0000 UTC m=+1141.279197448" lastFinishedPulling="2025-12-01 03:15:15.33340722 +0000 UTC m=+1144.844661582" observedRunningTime="2025-12-01 03:15:16.614167939 +0000 UTC m=+1146.125422321" watchObservedRunningTime="2025-12-01 03:15:16.615399781 +0000 UTC m=+1146.126654153" Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.369241 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.369299 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.369344 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.370084 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34d41201e834b41f2c5149b0278e08d421cef1c0ed99b101f5ffb45ff209ff57"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.370137 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://34d41201e834b41f2c5149b0278e08d421cef1c0ed99b101f5ffb45ff209ff57" gracePeriod=600 Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.590113 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="34d41201e834b41f2c5149b0278e08d421cef1c0ed99b101f5ffb45ff209ff57" exitCode=0 Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.590194 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"34d41201e834b41f2c5149b0278e08d421cef1c0ed99b101f5ffb45ff209ff57"} Dec 01 03:15:17 crc kubenswrapper[4880]: I1201 03:15:17.590550 4880 scope.go:117] "RemoveContainer" containerID="2d75a52daa0e2a7f2599a1e892312d328b61520f98232bcc7cdb455390b50937" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.094520 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5df6945c99-72tng"] Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.096605 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.120110 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5df6945c99-72tng"] Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.147384 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-689598d56f-hm2sf"] Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.148455 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.164638 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-f9df6f645-phc82"] Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.165729 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.191971 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-f9df6f645-phc82"] Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.228209 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-689598d56f-hm2sf"] Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250525 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data-custom\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250569 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-config-data-custom\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250592 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-combined-ca-bundle\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250613 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " 
pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250637 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-combined-ca-bundle\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250672 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2km9\" (UniqueName: \"kubernetes.io/projected/697faf05-a750-48e7-be79-d66e35720ef0-kube-api-access-p2km9\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250841 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-combined-ca-bundle\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.250995 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7v6x\" (UniqueName: \"kubernetes.io/projected/2b6fa45f-b959-4fae-958d-06f32307b7d7-kube-api-access-p7v6x\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.251047 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnpwd\" (UniqueName: \"kubernetes.io/projected/84f52e98-a6f5-4212-bff7-6980ee04ddaa-kube-api-access-vnpwd\") pod 
\"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.251098 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data-custom\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.251118 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-config-data\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.251209 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352448 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnpwd\" (UniqueName: \"kubernetes.io/projected/84f52e98-a6f5-4212-bff7-6980ee04ddaa-kube-api-access-vnpwd\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352505 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-config-data\") pod 
\"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352529 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data-custom\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352571 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352614 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data-custom\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352631 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-config-data-custom\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352646 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-combined-ca-bundle\") pod \"heat-api-689598d56f-hm2sf\" (UID: 
\"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352665 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352684 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-combined-ca-bundle\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352716 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2km9\" (UniqueName: \"kubernetes.io/projected/697faf05-a750-48e7-be79-d66e35720ef0-kube-api-access-p2km9\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352742 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-combined-ca-bundle\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.352789 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7v6x\" (UniqueName: \"kubernetes.io/projected/2b6fa45f-b959-4fae-958d-06f32307b7d7-kube-api-access-p7v6x\") pod \"heat-api-689598d56f-hm2sf\" (UID: 
\"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.363414 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-combined-ca-bundle\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.368020 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-config-data-custom\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.369292 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data-custom\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.370197 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.371021 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/697faf05-a750-48e7-be79-d66e35720ef0-config-data\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc 
kubenswrapper[4880]: I1201 03:15:19.371555 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data-custom\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.377731 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnpwd\" (UniqueName: \"kubernetes.io/projected/84f52e98-a6f5-4212-bff7-6980ee04ddaa-kube-api-access-vnpwd\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.383228 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-combined-ca-bundle\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.384990 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2km9\" (UniqueName: \"kubernetes.io/projected/697faf05-a750-48e7-be79-d66e35720ef0-kube-api-access-p2km9\") pod \"heat-engine-5df6945c99-72tng\" (UID: \"697faf05-a750-48e7-be79-d66e35720ef0\") " pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.385919 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data\") pod \"heat-cfnapi-f9df6f645-phc82\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.388952 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-combined-ca-bundle\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.393858 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7v6x\" (UniqueName: \"kubernetes.io/projected/2b6fa45f-b959-4fae-958d-06f32307b7d7-kube-api-access-p7v6x\") pod \"heat-api-689598d56f-hm2sf\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.428703 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.465270 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:19 crc kubenswrapper[4880]: I1201 03:15:19.479362 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.906399 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67cbdfff6f-zbn4x"] Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.906808 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-67cbdfff6f-zbn4x" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" containerID="cri-o://a4e792d60a1d7e5e89d0d3dae093cee383fa962a21058cb05d426402d90cd639" gracePeriod=60 Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.933957 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-67cbdfff6f-zbn4x" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.173:8004/healthcheck\": EOF" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.949276 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-558d57d895-k4fk5"] Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.949463 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-558d57d895-k4fk5" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" containerID="cri-o://051043c9ec951382d04197f31f174423dce8aa44329af96960966d39e488e0d5" gracePeriod=60 Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.961191 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-558d57d895-k4fk5" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.171:8000/healthcheck\": EOF" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.961646 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-558d57d895-k4fk5" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" probeResult="failure" 
output="Get \"http://10.217.0.171:8000/healthcheck\": EOF" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.963923 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-56687d89b8-gsngq"] Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.964976 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.969548 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.979138 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Dec 01 03:15:20 crc kubenswrapper[4880]: I1201 03:15:20.985620 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-56687d89b8-gsngq"] Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.027672 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-558d57d895-k4fk5" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.171:8000/healthcheck\": EOF" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.047017 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.095813 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f7b\" (UniqueName: \"kubernetes.io/projected/d654d580-4832-43cf-b7b9-e91cca241869-kube-api-access-q4f7b\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.096016 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-config-data\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.096051 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-combined-ca-bundle\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.096154 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-internal-tls-certs\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.096220 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-public-tls-certs\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.096266 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-config-data-custom\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.129775 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-789ffb5f5c-5pxbw"] 
Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.131205 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.135410 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.136119 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.142709 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-789ffb5f5c-5pxbw"] Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.187284 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-585499bb75-ggpgg"] Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.187575 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="dnsmasq-dns" containerID="cri-o://97758e157c7df9f6d3d14bcd987018f39501d94a77d192c407c6b4b9dc126063" gracePeriod=10 Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.205623 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-config-data\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.205680 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-combined-ca-bundle\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc 
kubenswrapper[4880]: I1201 03:15:21.205735 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-config-data-custom\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.205756 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-internal-tls-certs\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.205782 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-internal-tls-certs\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.205852 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-public-tls-certs\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.210070 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-config-data-custom\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc 
kubenswrapper[4880]: I1201 03:15:21.210122 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-public-tls-certs\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.210156 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-combined-ca-bundle\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.210243 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-config-data\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.210279 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4f7b\" (UniqueName: \"kubernetes.io/projected/d654d580-4832-43cf-b7b9-e91cca241869-kube-api-access-q4f7b\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.210313 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7n7\" (UniqueName: \"kubernetes.io/projected/de08a81c-8cfb-4116-8f93-25192b8f205e-kube-api-access-zk7n7\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " 
pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.224310 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-config-data-custom\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.224814 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-config-data\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.228787 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-combined-ca-bundle\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.252306 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-public-tls-certs\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.253086 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4f7b\" (UniqueName: \"kubernetes.io/projected/d654d580-4832-43cf-b7b9-e91cca241869-kube-api-access-q4f7b\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.255044 
4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d654d580-4832-43cf-b7b9-e91cca241869-internal-tls-certs\") pod \"heat-api-56687d89b8-gsngq\" (UID: \"d654d580-4832-43cf-b7b9-e91cca241869\") " pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.313987 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-public-tls-certs\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.314027 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-combined-ca-bundle\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.314077 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-config-data\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.314105 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7n7\" (UniqueName: \"kubernetes.io/projected/de08a81c-8cfb-4116-8f93-25192b8f205e-kube-api-access-zk7n7\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.314181 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-config-data-custom\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.314210 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-internal-tls-certs\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.320890 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-config-data\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.326412 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-public-tls-certs\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.326970 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-internal-tls-certs\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.341425 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7n7\" 
(UniqueName: \"kubernetes.io/projected/de08a81c-8cfb-4116-8f93-25192b8f205e-kube-api-access-zk7n7\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.345160 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.363602 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-combined-ca-bundle\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.365664 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de08a81c-8cfb-4116-8f93-25192b8f205e-config-data-custom\") pod \"heat-cfnapi-789ffb5f5c-5pxbw\" (UID: \"de08a81c-8cfb-4116-8f93-25192b8f205e\") " pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.469794 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.640250 4880 generic.go:334] "Generic (PLEG): container finished" podID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerID="97758e157c7df9f6d3d14bcd987018f39501d94a77d192c407c6b4b9dc126063" exitCode=0 Dec 01 03:15:21 crc kubenswrapper[4880]: I1201 03:15:21.640290 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" event={"ID":"22261634-3cd2-4faf-9264-9234fa4b43ca","Type":"ContainerDied","Data":"97758e157c7df9f6d3d14bcd987018f39501d94a77d192c407c6b4b9dc126063"} Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.273425 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7467d8cff5-62dbn"] Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.284680 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.288689 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.288835 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.288949 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.291112 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7467d8cff5-62dbn"] Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439324 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-config-data\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: 
\"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439419 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345c72ac-f2df-430d-8a61-9416bdda67a9-run-httpd\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439510 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-public-tls-certs\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439553 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-combined-ca-bundle\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439610 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpz5d\" (UniqueName: \"kubernetes.io/projected/345c72ac-f2df-430d-8a61-9416bdda67a9-kube-api-access-fpz5d\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439652 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345c72ac-f2df-430d-8a61-9416bdda67a9-log-httpd\") 
pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439682 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-internal-tls-certs\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.439710 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/345c72ac-f2df-430d-8a61-9416bdda67a9-etc-swift\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541191 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-combined-ca-bundle\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541257 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpz5d\" (UniqueName: \"kubernetes.io/projected/345c72ac-f2df-430d-8a61-9416bdda67a9-kube-api-access-fpz5d\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541298 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345c72ac-f2df-430d-8a61-9416bdda67a9-log-httpd\") pod 
\"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541328 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-internal-tls-certs\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541354 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/345c72ac-f2df-430d-8a61-9416bdda67a9-etc-swift\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541384 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-config-data\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541419 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345c72ac-f2df-430d-8a61-9416bdda67a9-run-httpd\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.541481 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-public-tls-certs\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: 
\"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.543002 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345c72ac-f2df-430d-8a61-9416bdda67a9-log-httpd\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.546288 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/345c72ac-f2df-430d-8a61-9416bdda67a9-run-httpd\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.566584 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-internal-tls-certs\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.567445 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-combined-ca-bundle\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.614797 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpz5d\" (UniqueName: \"kubernetes.io/projected/345c72ac-f2df-430d-8a61-9416bdda67a9-kube-api-access-fpz5d\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" 
Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.615574 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-public-tls-certs\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.616120 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/345c72ac-f2df-430d-8a61-9416bdda67a9-etc-swift\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.623126 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345c72ac-f2df-430d-8a61-9416bdda67a9-config-data\") pod \"swift-proxy-7467d8cff5-62dbn\" (UID: \"345c72ac-f2df-430d-8a61-9416bdda67a9\") " pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:22 crc kubenswrapper[4880]: I1201 03:15:22.911218 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:23 crc kubenswrapper[4880]: I1201 03:15:23.582201 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Dec 01 03:15:23 crc kubenswrapper[4880]: I1201 03:15:23.679451 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad61e418-7135-4561-af80-28a601030e3b" containerID="dd3de9c28a81ad0cc2e5d47aa8e0ff0cceff227768670c142654202d23e105f0" exitCode=137 Dec 01 03:15:23 crc kubenswrapper[4880]: I1201 03:15:23.679490 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad61e418-7135-4561-af80-28a601030e3b","Type":"ContainerDied","Data":"dd3de9c28a81ad0cc2e5d47aa8e0ff0cceff227768670c142654202d23e105f0"} Dec 01 03:15:23 crc kubenswrapper[4880]: I1201 03:15:23.984177 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.165:8776/healthcheck\": dial tcp 10.217.0.165:8776: connect: connection refused" Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.501704 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.502032 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-central-agent" containerID="cri-o://189cc505df6e96013da01bc755f07d64f4c74133b749d59a5cc99cf77a90114c" gracePeriod=30 Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.502098 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="sg-core" containerID="cri-o://427c50ec3e3d1bbecb6fd33a4262e4ddb3f2d8ee7662921c2d8b684f98bfa43c" gracePeriod=30 Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.502141 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-notification-agent" containerID="cri-o://a9d7e883ebd01133d3eb1c141114d86ecdd1ba6fce788be2ba85f5873c0ce8aa" gracePeriod=30 Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.502235 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="proxy-httpd" containerID="cri-o://aef8c5cf6096aff041c580bb71c77a555c9de81c6d2fdab2ba10ec78bfa44b2d" gracePeriod=30 Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.606173 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": read tcp 10.217.0.2:56948->10.217.0.166:3000: read: connection reset by peer" Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.681565 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": dial tcp 10.217.0.166:3000: connect: connection refused" Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.690945 4880 generic.go:334] "Generic (PLEG): container finished" podID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerID="aef8c5cf6096aff041c580bb71c77a555c9de81c6d2fdab2ba10ec78bfa44b2d" exitCode=0 Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.690976 4880 generic.go:334] "Generic (PLEG): container finished" 
podID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerID="427c50ec3e3d1bbecb6fd33a4262e4ddb3f2d8ee7662921c2d8b684f98bfa43c" exitCode=2 Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.690993 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerDied","Data":"aef8c5cf6096aff041c580bb71c77a555c9de81c6d2fdab2ba10ec78bfa44b2d"} Dec 01 03:15:24 crc kubenswrapper[4880]: I1201 03:15:24.691017 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerDied","Data":"427c50ec3e3d1bbecb6fd33a4262e4ddb3f2d8ee7662921c2d8b684f98bfa43c"} Dec 01 03:15:25 crc kubenswrapper[4880]: I1201 03:15:25.702347 4880 generic.go:334] "Generic (PLEG): container finished" podID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerID="189cc505df6e96013da01bc755f07d64f4c74133b749d59a5cc99cf77a90114c" exitCode=0 Dec 01 03:15:25 crc kubenswrapper[4880]: I1201 03:15:25.702410 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerDied","Data":"189cc505df6e96013da01bc755f07d64f4c74133b749d59a5cc99cf77a90114c"} Dec 01 03:15:25 crc kubenswrapper[4880]: I1201 03:15:25.705060 4880 generic.go:334] "Generic (PLEG): container finished" podID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerID="ca4abb4a90b26185324b9145545abeafcf27374b78455ab6064a09cf34a460ca" exitCode=137 Dec 01 03:15:25 crc kubenswrapper[4880]: I1201 03:15:25.705097 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerDied","Data":"ca4abb4a90b26185324b9145545abeafcf27374b78455ab6064a09cf34a460ca"} Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.432041 4880 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/heat-api-67cbdfff6f-zbn4x" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.173:8004/healthcheck\": read tcp 10.217.0.2:59576->10.217.0.173:8004: read: connection reset by peer" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.434402 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-67cbdfff6f-zbn4x" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.173:8004/healthcheck\": dial tcp 10.217.0.173:8004: connect: connection refused" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.458656 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-558d57d895-k4fk5" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.171:8000/healthcheck\": read tcp 10.217.0.2:38332->10.217.0.171:8000: read: connection reset by peer" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.636517 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.719785 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-config\") pod \"22261634-3cd2-4faf-9264-9234fa4b43ca\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.719950 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-svc\") pod \"22261634-3cd2-4faf-9264-9234fa4b43ca\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.720043 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-sb\") pod \"22261634-3cd2-4faf-9264-9234fa4b43ca\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.720090 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-swift-storage-0\") pod \"22261634-3cd2-4faf-9264-9234fa4b43ca\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.720146 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-nb\") pod \"22261634-3cd2-4faf-9264-9234fa4b43ca\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.720179 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw27t\" 
(UniqueName: \"kubernetes.io/projected/22261634-3cd2-4faf-9264-9234fa4b43ca-kube-api-access-jw27t\") pod \"22261634-3cd2-4faf-9264-9234fa4b43ca\" (UID: \"22261634-3cd2-4faf-9264-9234fa4b43ca\") " Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.734488 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22261634-3cd2-4faf-9264-9234fa4b43ca-kube-api-access-jw27t" (OuterVolumeSpecName: "kube-api-access-jw27t") pod "22261634-3cd2-4faf-9264-9234fa4b43ca" (UID: "22261634-3cd2-4faf-9264-9234fa4b43ca"). InnerVolumeSpecName "kube-api-access-jw27t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.802445 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.822603 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw27t\" (UniqueName: \"kubernetes.io/projected/22261634-3cd2-4faf-9264-9234fa4b43ca-kube-api-access-jw27t\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.847573 4880 generic.go:334] "Generic (PLEG): container finished" podID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerID="051043c9ec951382d04197f31f174423dce8aa44329af96960966d39e488e0d5" exitCode=0 Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.850105 4880 generic.go:334] "Generic (PLEG): container finished" podID="6d2263fe-480f-439e-8367-06dd063f952e" containerID="a4e792d60a1d7e5e89d0d3dae093cee383fa962a21058cb05d426402d90cd639" exitCode=0 Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.857762 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-585499bb75-ggpgg" event={"ID":"22261634-3cd2-4faf-9264-9234fa4b43ca","Type":"ContainerDied","Data":"855cb176751d1a94c963a6825030682c0caef8a9a068be7509712459bcde4b21"} Dec 01 03:15:26 crc kubenswrapper[4880]: 
I1201 03:15:26.857803 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-558d57d895-k4fk5" event={"ID":"cb1b1ac3-2d06-47e5-be03-dca35c8605be","Type":"ContainerDied","Data":"051043c9ec951382d04197f31f174423dce8aa44329af96960966d39e488e0d5"} Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.857821 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67cbdfff6f-zbn4x" event={"ID":"6d2263fe-480f-439e-8367-06dd063f952e","Type":"ContainerDied","Data":"a4e792d60a1d7e5e89d0d3dae093cee383fa962a21058cb05d426402d90cd639"} Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.857842 4880 scope.go:117] "RemoveContainer" containerID="97758e157c7df9f6d3d14bcd987018f39501d94a77d192c407c6b4b9dc126063" Dec 01 03:15:26 crc kubenswrapper[4880]: I1201 03:15:26.956395 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.012404 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22261634-3cd2-4faf-9264-9234fa4b43ca" (UID: "22261634-3cd2-4faf-9264-9234fa4b43ca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032046 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-scripts\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032184 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data-custom\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032276 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032340 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98j47\" (UniqueName: \"kubernetes.io/projected/ad61e418-7135-4561-af80-28a601030e3b-kube-api-access-98j47\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032398 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad61e418-7135-4561-af80-28a601030e3b-logs\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032438 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ad61e418-7135-4561-af80-28a601030e3b-etc-machine-id\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.032528 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-combined-ca-bundle\") pod \"ad61e418-7135-4561-af80-28a601030e3b\" (UID: \"ad61e418-7135-4561-af80-28a601030e3b\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.042507 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22261634-3cd2-4faf-9264-9234fa4b43ca" (UID: "22261634-3cd2-4faf-9264-9234fa4b43ca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.043824 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad61e418-7135-4561-af80-28a601030e3b-logs" (OuterVolumeSpecName: "logs") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.042083 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22261634-3cd2-4faf-9264-9234fa4b43ca" (UID: "22261634-3cd2-4faf-9264-9234fa4b43ca"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.045666 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad61e418-7135-4561-af80-28a601030e3b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.046384 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.048656 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-56687d89b8-gsngq"] Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.053361 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.078792 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-scripts" (OuterVolumeSpecName: "scripts") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.081521 4880 scope.go:117] "RemoveContainer" containerID="44ab0fefae3315753fb1181ff2051a0f5c3eb3b66b5cac2a27675746879db49f" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.082159 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad61e418-7135-4561-af80-28a601030e3b-kube-api-access-98j47" (OuterVolumeSpecName: "kube-api-access-98j47") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "kube-api-access-98j47". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: W1201 03:15:27.118211 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd654d580_4832_43cf_b7b9_e91cca241869.slice/crio-7d67e9c58a89ce17f0c0070daa6f245456c4fa396c5df810fb3b009a69e8ab43 WatchSource:0}: Error finding container 7d67e9c58a89ce17f0c0070daa6f245456c4fa396c5df810fb3b009a69e8ab43: Status 404 returned error can't find the container with id 7d67e9c58a89ce17f0c0070daa6f245456c4fa396c5df810fb3b009a69e8ab43 Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152709 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152735 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152743 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98j47\" (UniqueName: 
\"kubernetes.io/projected/ad61e418-7135-4561-af80-28a601030e3b-kube-api-access-98j47\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152754 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad61e418-7135-4561-af80-28a601030e3b-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152764 4880 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad61e418-7135-4561-af80-28a601030e3b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152771 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.152779 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.178172 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-config" (OuterVolumeSpecName: "config") pod "22261634-3cd2-4faf-9264-9234fa4b43ca" (UID: "22261634-3cd2-4faf-9264-9234fa4b43ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.187160 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22261634-3cd2-4faf-9264-9234fa4b43ca" (UID: "22261634-3cd2-4faf-9264-9234fa4b43ca"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.254609 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.254986 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22261634-3cd2-4faf-9264-9234fa4b43ca-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.255360 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.307694 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data" (OuterVolumeSpecName: "config-data") pod "ad61e418-7135-4561-af80-28a601030e3b" (UID: "ad61e418-7135-4561-af80-28a601030e3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.361489 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.361529 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad61e418-7135-4561-af80-28a601030e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.554416 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.580853 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-585499bb75-ggpgg"] Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.588231 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.620288 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-585499bb75-ggpgg"] Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673401 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data-custom\") pod \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673701 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data-custom\") pod \"6d2263fe-480f-439e-8367-06dd063f952e\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673770 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgjkn\" (UniqueName: \"kubernetes.io/projected/6d2263fe-480f-439e-8367-06dd063f952e-kube-api-access-mgjkn\") pod \"6d2263fe-480f-439e-8367-06dd063f952e\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673908 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-combined-ca-bundle\") pod \"6d2263fe-480f-439e-8367-06dd063f952e\" (UID: \"6d2263fe-480f-439e-8367-06dd063f952e\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673936 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data\") pod \"6d2263fe-480f-439e-8367-06dd063f952e\" (UID: 
\"6d2263fe-480f-439e-8367-06dd063f952e\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673961 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data\") pod \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.673989 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-combined-ca-bundle\") pod \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.674013 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9bt5\" (UniqueName: \"kubernetes.io/projected/cb1b1ac3-2d06-47e5-be03-dca35c8605be-kube-api-access-g9bt5\") pod \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\" (UID: \"cb1b1ac3-2d06-47e5-be03-dca35c8605be\") " Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.695284 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2263fe-480f-439e-8367-06dd063f952e-kube-api-access-mgjkn" (OuterVolumeSpecName: "kube-api-access-mgjkn") pod "6d2263fe-480f-439e-8367-06dd063f952e" (UID: "6d2263fe-480f-439e-8367-06dd063f952e"). InnerVolumeSpecName "kube-api-access-mgjkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.697176 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cb1b1ac3-2d06-47e5-be03-dca35c8605be" (UID: "cb1b1ac3-2d06-47e5-be03-dca35c8605be"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.736725 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6d2263fe-480f-439e-8367-06dd063f952e" (UID: "6d2263fe-480f-439e-8367-06dd063f952e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.751766 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1b1ac3-2d06-47e5-be03-dca35c8605be-kube-api-access-g9bt5" (OuterVolumeSpecName: "kube-api-access-g9bt5") pod "cb1b1ac3-2d06-47e5-be03-dca35c8605be" (UID: "cb1b1ac3-2d06-47e5-be03-dca35c8605be"). InnerVolumeSpecName "kube-api-access-g9bt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.754407 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-789ffb5f5c-5pxbw"] Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.777316 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9bt5\" (UniqueName: \"kubernetes.io/projected/cb1b1ac3-2d06-47e5-be03-dca35c8605be-kube-api-access-g9bt5\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.777368 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.777378 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: 
I1201 03:15:27.777386 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgjkn\" (UniqueName: \"kubernetes.io/projected/6d2263fe-480f-439e-8367-06dd063f952e-kube-api-access-mgjkn\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.954842 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5df6945c99-72tng"] Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.973722 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb1b1ac3-2d06-47e5-be03-dca35c8605be" (UID: "cb1b1ac3-2d06-47e5-be03-dca35c8605be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:27 crc kubenswrapper[4880]: I1201 03:15:27.979998 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"8fa73d5a87af237b0d0a9c3f24f3c3545af69a32a8108a4ef1e39e8382145766"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.009155 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d2263fe-480f-439e-8367-06dd063f952e" (UID: "6d2263fe-480f-439e-8367-06dd063f952e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.024813 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56687d89b8-gsngq" event={"ID":"d654d580-4832-43cf-b7b9-e91cca241869","Type":"ContainerStarted","Data":"7d67e9c58a89ce17f0c0070daa6f245456c4fa396c5df810fb3b009a69e8ab43"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.084465 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.084497 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.090011 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ad61e418-7135-4561-af80-28a601030e3b","Type":"ContainerDied","Data":"b628f301e210bdcc76839f33028508da81ccc76376d5e06471d686cc8e0a60e9"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.090061 4880 scope.go:117] "RemoveContainer" containerID="dd3de9c28a81ad0cc2e5d47aa8e0ff0cceff227768670c142654202d23e105f0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.090286 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.109542 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5c8ef62e-f8bf-4982-9c80-9a52bb538621","Type":"ContainerStarted","Data":"52eeb1e86c3eb9d8f628391d5d142be5fcd1df87aaa8963f23579cafbdff058c"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.116524 4880 generic.go:334] "Generic (PLEG): container finished" podID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerID="a9d7e883ebd01133d3eb1c141114d86ecdd1ba6fce788be2ba85f5873c0ce8aa" exitCode=0 Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.116603 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerDied","Data":"a9d7e883ebd01133d3eb1c141114d86ecdd1ba6fce788be2ba85f5873c0ce8aa"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.119046 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-558d57d895-k4fk5" event={"ID":"cb1b1ac3-2d06-47e5-be03-dca35c8605be","Type":"ContainerDied","Data":"99a2b2ccd94094c69ea8007a71b96c917487d55a87d44fab7e1584db2c064149"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.119118 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-558d57d895-k4fk5" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.119399 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-f9df6f645-phc82"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.122398 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" event={"ID":"de08a81c-8cfb-4116-8f93-25192b8f205e","Type":"ContainerStarted","Data":"a75474c28727f09820836159651a979e81a20604ce0fa1f73a79d0e4b46a1b92"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.133347 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67cbdfff6f-zbn4x" event={"ID":"6d2263fe-480f-439e-8367-06dd063f952e","Type":"ContainerDied","Data":"f7fc1539e6555183df13697f1f2f6cbf0a31f2c71571ffc7c91e5acf665cd2a4"} Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.134303 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67cbdfff6f-zbn4x" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.169456 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-689598d56f-hm2sf"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.214725 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.672858765 podStartE2EDuration="19.214709424s" podCreationTimestamp="2025-12-01 03:15:09 +0000 UTC" firstStartedPulling="2025-12-01 03:15:09.936291849 +0000 UTC m=+1139.447546221" lastFinishedPulling="2025-12-01 03:15:26.478142508 +0000 UTC m=+1155.989396880" observedRunningTime="2025-12-01 03:15:28.128104796 +0000 UTC m=+1157.639359168" watchObservedRunningTime="2025-12-01 03:15:28.214709424 +0000 UTC m=+1157.725963796" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.261390 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data" (OuterVolumeSpecName: "config-data") pod "6d2263fe-480f-439e-8367-06dd063f952e" (UID: "6d2263fe-480f-439e-8367-06dd063f952e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.271735 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data" (OuterVolumeSpecName: "config-data") pod "cb1b1ac3-2d06-47e5-be03-dca35c8605be" (UID: "cb1b1ac3-2d06-47e5-be03-dca35c8605be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.273262 4880 scope.go:117] "RemoveContainer" containerID="5da5fe1ed2572c5189882f5b93aae833278063bb0d054b3163cd6c11ac8028fe" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.290393 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7467d8cff5-62dbn"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.299818 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d2263fe-480f-439e-8367-06dd063f952e-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.299845 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb1b1ac3-2d06-47e5-be03-dca35c8605be-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.578593 4880 scope.go:117] "RemoveContainer" containerID="051043c9ec951382d04197f31f174423dce8aa44329af96960966d39e488e0d5" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.613466 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.618628 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67cbdfff6f-zbn4x"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.644440 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-67cbdfff6f-zbn4x"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.688675 4880 scope.go:117] "RemoveContainer" containerID="a4e792d60a1d7e5e89d0d3dae093cee383fa962a21058cb05d426402d90cd639" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.689467 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.709736 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715445 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv7pg\" (UniqueName: \"kubernetes.io/projected/e50f15dc-ed5f-4f63-872d-645d388b3d18-kube-api-access-pv7pg\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715632 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-combined-ca-bundle\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715690 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-scripts\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715722 4880 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-log-httpd\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715768 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-run-httpd\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715859 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-config-data\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.715925 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-sg-core-conf-yaml\") pod \"e50f15dc-ed5f-4f63-872d-645d388b3d18\" (UID: \"e50f15dc-ed5f-4f63-872d-645d388b3d18\") " Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.718185 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-558d57d895-k4fk5"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.720817 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.721667 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.733297 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-558d57d895-k4fk5"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.735291 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-scripts" (OuterVolumeSpecName: "scripts") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.736463 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50f15dc-ed5f-4f63-872d-645d388b3d18-kube-api-access-pv7pg" (OuterVolumeSpecName: "kube-api-access-pv7pg") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "kube-api-access-pv7pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.739201 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740541 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="init" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740560 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="init" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740584 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-central-agent" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740590 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-central-agent" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740601 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api-log" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740607 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api-log" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740619 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="sg-core" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740625 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="sg-core" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740637 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" Dec 01 03:15:28 crc 
kubenswrapper[4880]: I1201 03:15:28.740643 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740650 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-notification-agent" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740656 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-notification-agent" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740667 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740673 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740692 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="proxy-httpd" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740698 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="proxy-httpd" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740710 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740716 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api" Dec 01 03:15:28 crc kubenswrapper[4880]: E1201 03:15:28.740726 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="dnsmasq-dns" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 
03:15:28.740732 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="dnsmasq-dns" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740914 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-notification-agent" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740930 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740938 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="proxy-httpd" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740946 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2263fe-480f-439e-8367-06dd063f952e" containerName="heat-api" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740953 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" containerName="heat-cfnapi" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740965 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="ceilometer-central-agent" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740978 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" containerName="dnsmasq-dns" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740986 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad61e418-7135-4561-af80-28a601030e3b" containerName="cinder-api-log" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.740996 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" containerName="sg-core" Dec 01 03:15:28 crc kubenswrapper[4880]: 
I1201 03:15:28.743148 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.750399 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.750639 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.750922 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.758381 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819270 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a17482-31bd-4eb3-bbf0-db0a24905c39-logs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819315 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-scripts\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819377 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-config-data-custom\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819405 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819438 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0a17482-31bd-4eb3-bbf0-db0a24905c39-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819534 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdtv6\" (UniqueName: \"kubernetes.io/projected/e0a17482-31bd-4eb3-bbf0-db0a24905c39-kube-api-access-cdtv6\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819557 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-config-data\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819574 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819605 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819649 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819660 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819670 4880 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e50f15dc-ed5f-4f63-872d-645d388b3d18-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.819679 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv7pg\" (UniqueName: \"kubernetes.io/projected/e50f15dc-ed5f-4f63-872d-645d388b3d18-kube-api-access-pv7pg\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.820336 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22261634-3cd2-4faf-9264-9234fa4b43ca" path="/var/lib/kubelet/pods/22261634-3cd2-4faf-9264-9234fa4b43ca/volumes" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.821055 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2263fe-480f-439e-8367-06dd063f952e" path="/var/lib/kubelet/pods/6d2263fe-480f-439e-8367-06dd063f952e/volumes" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.821576 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad61e418-7135-4561-af80-28a601030e3b" path="/var/lib/kubelet/pods/ad61e418-7135-4561-af80-28a601030e3b/volumes" Dec 01 03:15:28 
crc kubenswrapper[4880]: I1201 03:15:28.827218 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1b1ac3-2d06-47e5-be03-dca35c8605be" path="/var/lib/kubelet/pods/cb1b1ac3-2d06-47e5-be03-dca35c8605be/volumes" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.860495 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.921846 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdtv6\" (UniqueName: \"kubernetes.io/projected/e0a17482-31bd-4eb3-bbf0-db0a24905c39-kube-api-access-cdtv6\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.922717 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-config-data\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.922833 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.922981 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.923064 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a17482-31bd-4eb3-bbf0-db0a24905c39-logs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.923130 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-scripts\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.923217 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-config-data-custom\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.923296 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.924340 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0a17482-31bd-4eb3-bbf0-db0a24905c39-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.924547 
4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.924658 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e0a17482-31bd-4eb3-bbf0-db0a24905c39-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.924968 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a17482-31bd-4eb3-bbf0-db0a24905c39-logs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.927947 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.934845 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.945276 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdtv6\" (UniqueName: \"kubernetes.io/projected/e0a17482-31bd-4eb3-bbf0-db0a24905c39-kube-api-access-cdtv6\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 
03:15:28.947214 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-config-data-custom\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.948061 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-config-data\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.948492 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-scripts\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:28 crc kubenswrapper[4880]: I1201 03:15:28.956572 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a17482-31bd-4eb3-bbf0-db0a24905c39-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e0a17482-31bd-4eb3-bbf0-db0a24905c39\") " pod="openstack/cinder-api-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.065332 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.083099 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.127775 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.147185 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-config-data" (OuterVolumeSpecName: "config-data") pod "e50f15dc-ed5f-4f63-872d-645d388b3d18" (UID: "e50f15dc-ed5f-4f63-872d-645d388b3d18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.158646 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" event={"ID":"de08a81c-8cfb-4116-8f93-25192b8f205e","Type":"ContainerStarted","Data":"d54e48dbc2ee544fbdc43c1940b3beab6e07964589621cbbece8b950aac14ab2"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.159084 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.180345 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689598d56f-hm2sf" event={"ID":"2b6fa45f-b959-4fae-958d-06f32307b7d7","Type":"ContainerStarted","Data":"34ba766f0b0bb38387d02b6cc3a182ee09e98d79af0bc5d8a2681f0cfe9d2841"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.180378 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689598d56f-hm2sf" event={"ID":"2b6fa45f-b959-4fae-958d-06f32307b7d7","Type":"ContainerStarted","Data":"461e5b8e620f86da9114f89eb103e9328e420fdc360f6bb14d84987a42c483f0"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.181124 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.192318 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" podStartSLOduration=9.192297304 podStartE2EDuration="9.192297304s" podCreationTimestamp="2025-12-01 03:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:29.183239845 +0000 UTC m=+1158.694494217" watchObservedRunningTime="2025-12-01 03:15:29.192297304 +0000 UTC m=+1158.703551676" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.197433 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56687d89b8-gsngq" event={"ID":"d654d580-4832-43cf-b7b9-e91cca241869","Type":"ContainerStarted","Data":"65b771419422a7cb5ba9e7f5946f783a7a2c0696acdf4d095dfb04371b3d6e09"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.197803 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.206455 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7467d8cff5-62dbn" event={"ID":"345c72ac-f2df-430d-8a61-9416bdda67a9","Type":"ContainerStarted","Data":"a5cadebbba67d1f3461816fea9bf5ce91f4cdff524edb717c642a7580a98284d"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.206500 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7467d8cff5-62dbn" event={"ID":"345c72ac-f2df-430d-8a61-9416bdda67a9","Type":"ContainerStarted","Data":"51a7beef822a493e673ddd5fa25818193bf199077a681f727fe174bc32b43ee8"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.230605 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e50f15dc-ed5f-4f63-872d-645d388b3d18-config-data\") on node \"crc\" DevicePath \"\"" 
Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.233811 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-689598d56f-hm2sf" podStartSLOduration=10.233794632 podStartE2EDuration="10.233794632s" podCreationTimestamp="2025-12-01 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:29.216046914 +0000 UTC m=+1158.727301286" watchObservedRunningTime="2025-12-01 03:15:29.233794632 +0000 UTC m=+1158.745049004" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.242074 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e50f15dc-ed5f-4f63-872d-645d388b3d18","Type":"ContainerDied","Data":"71545db480525d418feaa2e57e424ad16733ce6ebd25fe7539c0c6168e7ad508"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.242142 4880 scope.go:117] "RemoveContainer" containerID="aef8c5cf6096aff041c580bb71c77a555c9de81c6d2fdab2ba10ec78bfa44b2d" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.242208 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.257077 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-56687d89b8-gsngq" podStartSLOduration=9.25705845 podStartE2EDuration="9.25705845s" podCreationTimestamp="2025-12-01 03:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:29.240258716 +0000 UTC m=+1158.751513088" watchObservedRunningTime="2025-12-01 03:15:29.25705845 +0000 UTC m=+1158.768312822" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.261763 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5df6945c99-72tng" event={"ID":"697faf05-a750-48e7-be79-d66e35720ef0","Type":"ContainerStarted","Data":"2b4d01a0897ee3d017528f3aaf1d29080e248b9d191dbe8a8da5621ca1c6754c"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.261812 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5df6945c99-72tng" event={"ID":"697faf05-a750-48e7-be79-d66e35720ef0","Type":"ContainerStarted","Data":"e216aa5c73d876924e091a41a95122c55a74c0defeeb55adf788ab1c9505acb0"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.261852 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.264988 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f9df6f645-phc82" event={"ID":"84f52e98-a6f5-4212-bff7-6980ee04ddaa","Type":"ContainerStarted","Data":"3c30a259c3bd6d0fc0498291e917c043ded622dc5a14e5e937b39ded2a330745"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.265029 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f9df6f645-phc82" 
event={"ID":"84f52e98-a6f5-4212-bff7-6980ee04ddaa","Type":"ContainerStarted","Data":"870a86d3b226564f966187e6aa54963e1fe846c9a68245d45809442dbb714ea3"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.265182 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.273289 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerStarted","Data":"35441ddb895b7c9641ccfb5abb51fc60e28765d8c6dc4e3ba059e57ec43f3d30"} Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.273671 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6ddc7fc844-5qd9h" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" containerID="cri-o://35441ddb895b7c9641ccfb5abb51fc60e28765d8c6dc4e3ba059e57ec43f3d30" gracePeriod=30 Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.273617 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6ddc7fc844-5qd9h" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon-log" containerID="cri-o://a21857edb278cf8f3e444c37932515dbca958eaea72385984220acbeafa3688d" gracePeriod=30 Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.284524 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5df6945c99-72tng" podStartSLOduration=10.284505884 podStartE2EDuration="10.284505884s" podCreationTimestamp="2025-12-01 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:29.279482397 +0000 UTC m=+1158.790736779" watchObservedRunningTime="2025-12-01 03:15:29.284505884 +0000 UTC m=+1158.795760256" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.394989 4880 scope.go:117] 
"RemoveContainer" containerID="427c50ec3e3d1bbecb6fd33a4262e4ddb3f2d8ee7662921c2d8b684f98bfa43c" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.428135 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-f9df6f645-phc82" podStartSLOduration=10.428110891 podStartE2EDuration="10.428110891s" podCreationTimestamp="2025-12-01 03:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:29.344955181 +0000 UTC m=+1158.856209543" watchObservedRunningTime="2025-12-01 03:15:29.428110891 +0000 UTC m=+1158.939365263" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.525192 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.527769 4880 scope.go:117] "RemoveContainer" containerID="a9d7e883ebd01133d3eb1c141114d86ecdd1ba6fce788be2ba85f5873c0ce8aa" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.549198 4880 scope.go:117] "RemoveContainer" containerID="189cc505df6e96013da01bc755f07d64f4c74133b749d59a5cc99cf77a90114c" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.560452 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.565941 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.568279 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.571181 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.575199 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.586036 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676626 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676671 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85xt\" (UniqueName: \"kubernetes.io/projected/3b493aa6-cf3f-497d-974e-d0e06f99c41b-kube-api-access-j85xt\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676692 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-config-data\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676728 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676756 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-scripts\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676891 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-run-httpd\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.676947 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-log-httpd\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782119 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-run-httpd\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782186 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-log-httpd\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782210 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782230 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85xt\" (UniqueName: \"kubernetes.io/projected/3b493aa6-cf3f-497d-974e-d0e06f99c41b-kube-api-access-j85xt\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782247 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-config-data\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782273 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.782295 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-scripts\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.787004 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-run-httpd\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 
crc kubenswrapper[4880]: I1201 03:15:29.787228 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-log-httpd\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.791778 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-scripts\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.794368 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.796201 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.798615 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-config-data\") pod \"ceilometer-0\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.808267 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85xt\" (UniqueName: \"kubernetes.io/projected/3b493aa6-cf3f-497d-974e-d0e06f99c41b-kube-api-access-j85xt\") pod \"ceilometer-0\" (UID: 
\"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " pod="openstack/ceilometer-0" Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.826032 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 03:15:29 crc kubenswrapper[4880]: I1201 03:15:29.937130 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.288462 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7467d8cff5-62dbn" event={"ID":"345c72ac-f2df-430d-8a61-9416bdda67a9","Type":"ContainerStarted","Data":"4a3237d009339139d3bc90bb1dbc14c5ad4598649cab928924206367f9fd13de"} Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.288728 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.288741 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.291268 4880 generic.go:334] "Generic (PLEG): container finished" podID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerID="3c30a259c3bd6d0fc0498291e917c043ded622dc5a14e5e937b39ded2a330745" exitCode=1 Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.291338 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f9df6f645-phc82" event={"ID":"84f52e98-a6f5-4212-bff7-6980ee04ddaa","Type":"ContainerDied","Data":"3c30a259c3bd6d0fc0498291e917c043ded622dc5a14e5e937b39ded2a330745"} Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.291952 4880 scope.go:117] "RemoveContainer" containerID="3c30a259c3bd6d0fc0498291e917c043ded622dc5a14e5e937b39ded2a330745" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.293824 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"e0a17482-31bd-4eb3-bbf0-db0a24905c39","Type":"ContainerStarted","Data":"6daab4c4504647b79b576fedcf2d780bdece70d9c98d79607d0dbb91fd8a5966"} Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.299180 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerID="34ba766f0b0bb38387d02b6cc3a182ee09e98d79af0bc5d8a2681f0cfe9d2841" exitCode=1 Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.299293 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689598d56f-hm2sf" event={"ID":"2b6fa45f-b959-4fae-958d-06f32307b7d7","Type":"ContainerDied","Data":"34ba766f0b0bb38387d02b6cc3a182ee09e98d79af0bc5d8a2681f0cfe9d2841"} Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.299890 4880 scope.go:117] "RemoveContainer" containerID="34ba766f0b0bb38387d02b6cc3a182ee09e98d79af0bc5d8a2681f0cfe9d2841" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.324228 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7467d8cff5-62dbn" podStartSLOduration=8.324210762 podStartE2EDuration="8.324210762s" podCreationTimestamp="2025-12-01 03:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:30.306897724 +0000 UTC m=+1159.818152096" watchObservedRunningTime="2025-12-01 03:15:30.324210762 +0000 UTC m=+1159.835465134" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.473895 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.811629 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50f15dc-ed5f-4f63-872d-645d388b3d18" path="/var/lib/kubelet/pods/e50f15dc-ed5f-4f63-872d-645d388b3d18/volumes" Dec 01 03:15:30 crc kubenswrapper[4880]: I1201 03:15:30.865347 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.363155 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerStarted","Data":"7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6"} Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.363200 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerStarted","Data":"e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6"} Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.363209 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerStarted","Data":"bbea0488ae6764725c12c0543a00f3ca0f0f36c3e2a225862e69aace2c068924"} Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.378264 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerID="bfa05f8e7fe4ec009b8fda2cf15e0cd21c8165f708dfa867be0bfe9dc411f4de" exitCode=1 Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.378360 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689598d56f-hm2sf" event={"ID":"2b6fa45f-b959-4fae-958d-06f32307b7d7","Type":"ContainerDied","Data":"bfa05f8e7fe4ec009b8fda2cf15e0cd21c8165f708dfa867be0bfe9dc411f4de"} Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.378404 4880 scope.go:117] "RemoveContainer" containerID="34ba766f0b0bb38387d02b6cc3a182ee09e98d79af0bc5d8a2681f0cfe9d2841" Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.378829 4880 scope.go:117] "RemoveContainer" containerID="bfa05f8e7fe4ec009b8fda2cf15e0cd21c8165f708dfa867be0bfe9dc411f4de" Dec 01 03:15:31 crc kubenswrapper[4880]: E1201 03:15:31.379233 4880 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-689598d56f-hm2sf_openstack(2b6fa45f-b959-4fae-958d-06f32307b7d7)\"" pod="openstack/heat-api-689598d56f-hm2sf" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.395710 4880 generic.go:334] "Generic (PLEG): container finished" podID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerID="b4fd6cff0eb7d1c286ab99378092e9ca98740a62ffcc749912c6759b5c63b40c" exitCode=1 Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.395797 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f9df6f645-phc82" event={"ID":"84f52e98-a6f5-4212-bff7-6980ee04ddaa","Type":"ContainerDied","Data":"b4fd6cff0eb7d1c286ab99378092e9ca98740a62ffcc749912c6759b5c63b40c"} Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.396492 4880 scope.go:117] "RemoveContainer" containerID="b4fd6cff0eb7d1c286ab99378092e9ca98740a62ffcc749912c6759b5c63b40c" Dec 01 03:15:31 crc kubenswrapper[4880]: E1201 03:15:31.396702 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-f9df6f645-phc82_openstack(84f52e98-a6f5-4212-bff7-6980ee04ddaa)\"" pod="openstack/heat-cfnapi-f9df6f645-phc82" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.407964 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e0a17482-31bd-4eb3-bbf0-db0a24905c39","Type":"ContainerStarted","Data":"cee959040a359d3ff983259f57f00caea36ce8b3416638803db611c58fcbbcf6"} Dec 01 03:15:31 crc kubenswrapper[4880]: I1201 03:15:31.459261 4880 scope.go:117] "RemoveContainer" containerID="3c30a259c3bd6d0fc0498291e917c043ded622dc5a14e5e937b39ded2a330745" Dec 01 03:15:32 crc kubenswrapper[4880]: I1201 
03:15:32.417741 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerStarted","Data":"eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8"} Dec 01 03:15:32 crc kubenswrapper[4880]: I1201 03:15:32.420670 4880 scope.go:117] "RemoveContainer" containerID="bfa05f8e7fe4ec009b8fda2cf15e0cd21c8165f708dfa867be0bfe9dc411f4de" Dec 01 03:15:32 crc kubenswrapper[4880]: E1201 03:15:32.421017 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-689598d56f-hm2sf_openstack(2b6fa45f-b959-4fae-958d-06f32307b7d7)\"" pod="openstack/heat-api-689598d56f-hm2sf" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" Dec 01 03:15:32 crc kubenswrapper[4880]: I1201 03:15:32.422852 4880 scope.go:117] "RemoveContainer" containerID="b4fd6cff0eb7d1c286ab99378092e9ca98740a62ffcc749912c6759b5c63b40c" Dec 01 03:15:32 crc kubenswrapper[4880]: E1201 03:15:32.423051 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-f9df6f645-phc82_openstack(84f52e98-a6f5-4212-bff7-6980ee04ddaa)\"" pod="openstack/heat-cfnapi-f9df6f645-phc82" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" Dec 01 03:15:32 crc kubenswrapper[4880]: I1201 03:15:32.424383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e0a17482-31bd-4eb3-bbf0-db0a24905c39","Type":"ContainerStarted","Data":"a1e5cc3e9eba8024762fb08ebcfb8af0c7deb74296a3be34f89acb02b05c6484"} Dec 01 03:15:32 crc kubenswrapper[4880]: I1201 03:15:32.424919 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 03:15:33 crc kubenswrapper[4880]: I1201 03:15:33.450774 4880 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.450756376 podStartE2EDuration="5.450756376s" podCreationTimestamp="2025-12-01 03:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:32.530588078 +0000 UTC m=+1162.041842450" watchObservedRunningTime="2025-12-01 03:15:33.450756376 +0000 UTC m=+1162.962010748" Dec 01 03:15:33 crc kubenswrapper[4880]: I1201 03:15:33.453544 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.446299 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-central-agent" containerID="cri-o://e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6" gracePeriod=30 Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.446555 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerStarted","Data":"c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58"} Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.446590 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.446823 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="proxy-httpd" containerID="cri-o://c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58" gracePeriod=30 Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.446864 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="sg-core" containerID="cri-o://eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8" gracePeriod=30 Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.446910 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-notification-agent" containerID="cri-o://7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6" gracePeriod=30 Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.466844 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.466894 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.467478 4880 scope.go:117] "RemoveContainer" containerID="bfa05f8e7fe4ec009b8fda2cf15e0cd21c8165f708dfa867be0bfe9dc411f4de" Dec 01 03:15:34 crc kubenswrapper[4880]: E1201 03:15:34.467673 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-689598d56f-hm2sf_openstack(2b6fa45f-b959-4fae-958d-06f32307b7d7)\"" pod="openstack/heat-api-689598d56f-hm2sf" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.474645 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.79054891 podStartE2EDuration="5.474627375s" podCreationTimestamp="2025-12-01 03:15:29 +0000 UTC" firstStartedPulling="2025-12-01 03:15:30.489705723 +0000 UTC m=+1160.000960095" lastFinishedPulling="2025-12-01 03:15:33.173784188 +0000 UTC m=+1162.685038560" observedRunningTime="2025-12-01 03:15:34.471960457 +0000 UTC 
m=+1163.983214819" watchObservedRunningTime="2025-12-01 03:15:34.474627375 +0000 UTC m=+1163.985881747" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.480156 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.480192 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:34 crc kubenswrapper[4880]: I1201 03:15:34.480835 4880 scope.go:117] "RemoveContainer" containerID="b4fd6cff0eb7d1c286ab99378092e9ca98740a62ffcc749912c6759b5c63b40c" Dec 01 03:15:34 crc kubenswrapper[4880]: E1201 03:15:34.481202 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-f9df6f645-phc82_openstack(84f52e98-a6f5-4212-bff7-6980ee04ddaa)\"" pod="openstack/heat-cfnapi-f9df6f645-phc82" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.276481 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.473919 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerDied","Data":"c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58"} Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.473926 4880 generic.go:334] "Generic (PLEG): container finished" podID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerID="c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58" exitCode=0 Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.473999 4880 generic.go:334] "Generic (PLEG): container finished" podID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" 
containerID="eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8" exitCode=2 Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.474016 4880 generic.go:334] "Generic (PLEG): container finished" podID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerID="7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6" exitCode=0 Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.474032 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerDied","Data":"eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8"} Dec 01 03:15:35 crc kubenswrapper[4880]: I1201 03:15:35.474042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerDied","Data":"7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6"} Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.656211 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dpchb"] Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.657704 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.676586 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dpchb"] Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.751214 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92kgt\" (UniqueName: \"kubernetes.io/projected/a2489b51-6f7f-4f48-b614-870ab86df12a-kube-api-access-92kgt\") pod \"nova-api-db-create-dpchb\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.751264 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2489b51-6f7f-4f48-b614-870ab86df12a-operator-scripts\") pod \"nova-api-db-create-dpchb\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.853708 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92kgt\" (UniqueName: \"kubernetes.io/projected/a2489b51-6f7f-4f48-b614-870ab86df12a-kube-api-access-92kgt\") pod \"nova-api-db-create-dpchb\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.853754 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2489b51-6f7f-4f48-b614-870ab86df12a-operator-scripts\") pod \"nova-api-db-create-dpchb\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.854837 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a2489b51-6f7f-4f48-b614-870ab86df12a-operator-scripts\") pod \"nova-api-db-create-dpchb\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.909270 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92kgt\" (UniqueName: \"kubernetes.io/projected/a2489b51-6f7f-4f48-b614-870ab86df12a-kube-api-access-92kgt\") pod \"nova-api-db-create-dpchb\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.929554 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6403-account-create-update-89c7r"] Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.930806 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.937901 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.947062 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6403-account-create-update-89c7r"] Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.955913 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9z9m\" (UniqueName: \"kubernetes.io/projected/4fa52299-6b4d-47d0-8250-4707b96770f9-kube-api-access-n9z9m\") pod \"nova-api-6403-account-create-update-89c7r\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.956066 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4fa52299-6b4d-47d0-8250-4707b96770f9-operator-scripts\") pod \"nova-api-6403-account-create-update-89c7r\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:37 crc kubenswrapper[4880]: I1201 03:15:37.974315 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.021388 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xxfdq"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.022631 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.040250 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.042185 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xxfdq"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.060522 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7467d8cff5-62dbn" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.065860 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9z9m\" (UniqueName: \"kubernetes.io/projected/4fa52299-6b4d-47d0-8250-4707b96770f9-kube-api-access-n9z9m\") pod \"nova-api-6403-account-create-update-89c7r\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.065927 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlgz\" (UniqueName: 
\"kubernetes.io/projected/0bef3f01-6342-40a7-9213-9358a20b7efe-kube-api-access-njlgz\") pod \"nova-cell0-db-create-xxfdq\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.066021 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa52299-6b4d-47d0-8250-4707b96770f9-operator-scripts\") pod \"nova-api-6403-account-create-update-89c7r\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.066085 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bef3f01-6342-40a7-9213-9358a20b7efe-operator-scripts\") pod \"nova-cell0-db-create-xxfdq\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.069256 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa52299-6b4d-47d0-8250-4707b96770f9-operator-scripts\") pod \"nova-api-6403-account-create-update-89c7r\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.180667 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bef3f01-6342-40a7-9213-9358a20b7efe-operator-scripts\") pod \"nova-cell0-db-create-xxfdq\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.180794 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-njlgz\" (UniqueName: \"kubernetes.io/projected/0bef3f01-6342-40a7-9213-9358a20b7efe-kube-api-access-njlgz\") pod \"nova-cell0-db-create-xxfdq\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.212089 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bef3f01-6342-40a7-9213-9358a20b7efe-operator-scripts\") pod \"nova-cell0-db-create-xxfdq\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.233094 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9z9m\" (UniqueName: \"kubernetes.io/projected/4fa52299-6b4d-47d0-8250-4707b96770f9-kube-api-access-n9z9m\") pod \"nova-api-6403-account-create-update-89c7r\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.250920 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sfwjq"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.252784 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.288687 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njlgz\" (UniqueName: \"kubernetes.io/projected/0bef3f01-6342-40a7-9213-9358a20b7efe-kube-api-access-njlgz\") pod \"nova-cell0-db-create-xxfdq\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.290025 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.291113 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-operator-scripts\") pod \"nova-cell1-db-create-sfwjq\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.291166 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlz9z\" (UniqueName: \"kubernetes.io/projected/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-kube-api-access-tlz9z\") pod \"nova-cell1-db-create-sfwjq\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.298072 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sfwjq"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.327556 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e8e4-account-create-update-2vhq8"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.330700 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.349029 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e8e4-account-create-update-2vhq8"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.349186 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.395822 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qf2c\" (UniqueName: \"kubernetes.io/projected/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-kube-api-access-6qf2c\") pod \"nova-cell0-e8e4-account-create-update-2vhq8\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.395911 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-operator-scripts\") pod \"nova-cell0-e8e4-account-create-update-2vhq8\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.395948 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-operator-scripts\") pod \"nova-cell1-db-create-sfwjq\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.395997 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlz9z\" (UniqueName: \"kubernetes.io/projected/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-kube-api-access-tlz9z\") pod 
\"nova-cell1-db-create-sfwjq\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.396916 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-operator-scripts\") pod \"nova-cell1-db-create-sfwjq\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.423250 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2946-account-create-update-dq78p"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.424557 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.425793 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlz9z\" (UniqueName: \"kubernetes.io/projected/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-kube-api-access-tlz9z\") pod \"nova-cell1-db-create-sfwjq\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.427552 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.440704 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2946-account-create-update-dq78p"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.442041 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.499863 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xjb\" (UniqueName: \"kubernetes.io/projected/aed214c1-1d55-4063-929f-2c4b1d88f025-kube-api-access-n9xjb\") pod \"nova-cell1-2946-account-create-update-dq78p\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.500152 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed214c1-1d55-4063-929f-2c4b1d88f025-operator-scripts\") pod \"nova-cell1-2946-account-create-update-dq78p\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.500332 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qf2c\" (UniqueName: \"kubernetes.io/projected/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-kube-api-access-6qf2c\") pod \"nova-cell0-e8e4-account-create-update-2vhq8\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.500445 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-operator-scripts\") pod \"nova-cell0-e8e4-account-create-update-2vhq8\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.501358 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-operator-scripts\") pod \"nova-cell0-e8e4-account-create-update-2vhq8\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.540167 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qf2c\" (UniqueName: \"kubernetes.io/projected/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-kube-api-access-6qf2c\") pod \"nova-cell0-e8e4-account-create-update-2vhq8\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.593542 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-56687d89b8-gsngq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.602329 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xjb\" (UniqueName: \"kubernetes.io/projected/aed214c1-1d55-4063-929f-2c4b1d88f025-kube-api-access-n9xjb\") pod \"nova-cell1-2946-account-create-update-dq78p\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.602435 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed214c1-1d55-4063-929f-2c4b1d88f025-operator-scripts\") pod \"nova-cell1-2946-account-create-update-dq78p\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.604930 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed214c1-1d55-4063-929f-2c4b1d88f025-operator-scripts\") pod 
\"nova-cell1-2946-account-create-update-dq78p\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.645552 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xjb\" (UniqueName: \"kubernetes.io/projected/aed214c1-1d55-4063-929f-2c4b1d88f025-kube-api-access-n9xjb\") pod \"nova-cell1-2946-account-create-update-dq78p\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.688138 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.714852 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.746558 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-689598d56f-hm2sf"] Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.756806 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:38 crc kubenswrapper[4880]: I1201 03:15:38.845681 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dpchb"] Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.326893 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-789ffb5f5c-5pxbw" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.401483 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6403-account-create-update-89c7r"] Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.486429 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-f9df6f645-phc82"] Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.575992 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dpchb" event={"ID":"a2489b51-6f7f-4f48-b614-870ab86df12a","Type":"ContainerStarted","Data":"5c04206b77fa85519c3eef98930d3f91456f5ca03afc11974cfd06a140dc6fbf"} Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.601495 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6403-account-create-update-89c7r" event={"ID":"4fa52299-6b4d-47d0-8250-4707b96770f9","Type":"ContainerStarted","Data":"2d90c4b901f9ea929cccf8863e560a3989d6b5a1e69bcf8b99aaaa02a63cdee6"} Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.604079 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.685047 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5df6945c99-72tng" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.751139 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-combined-ca-bundle\") pod \"2b6fa45f-b959-4fae-958d-06f32307b7d7\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.751261 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7v6x\" (UniqueName: \"kubernetes.io/projected/2b6fa45f-b959-4fae-958d-06f32307b7d7-kube-api-access-p7v6x\") pod \"2b6fa45f-b959-4fae-958d-06f32307b7d7\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.751355 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data-custom\") pod \"2b6fa45f-b959-4fae-958d-06f32307b7d7\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.751469 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data\") pod \"2b6fa45f-b959-4fae-958d-06f32307b7d7\" (UID: \"2b6fa45f-b959-4fae-958d-06f32307b7d7\") " Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.762055 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-657545ccb7-km728"] Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.762470 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/heat-engine-657545ccb7-km728" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerName="heat-engine" containerID="cri-o://a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" gracePeriod=60 Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.766156 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6fa45f-b959-4fae-958d-06f32307b7d7-kube-api-access-p7v6x" (OuterVolumeSpecName: "kube-api-access-p7v6x") pod "2b6fa45f-b959-4fae-958d-06f32307b7d7" (UID: "2b6fa45f-b959-4fae-958d-06f32307b7d7"). InnerVolumeSpecName "kube-api-access-p7v6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.772614 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xxfdq"] Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.808285 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b6fa45f-b959-4fae-958d-06f32307b7d7" (UID: "2b6fa45f-b959-4fae-958d-06f32307b7d7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.843805 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b6fa45f-b959-4fae-958d-06f32307b7d7" (UID: "2b6fa45f-b959-4fae-958d-06f32307b7d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.856327 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.856362 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7v6x\" (UniqueName: \"kubernetes.io/projected/2b6fa45f-b959-4fae-958d-06f32307b7d7-kube-api-access-p7v6x\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.856373 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:39 crc kubenswrapper[4880]: I1201 03:15:39.994202 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data" (OuterVolumeSpecName: "config-data") pod "2b6fa45f-b959-4fae-958d-06f32307b7d7" (UID: "2b6fa45f-b959-4fae-958d-06f32307b7d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:40 crc kubenswrapper[4880]: I1201 03:15:40.066210 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fa45f-b959-4fae-958d-06f32307b7d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:40 crc kubenswrapper[4880]: I1201 03:15:40.199432 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sfwjq"] Dec 01 03:15:40 crc kubenswrapper[4880]: I1201 03:15:40.322975 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2946-account-create-update-dq78p"] Dec 01 03:15:40 crc kubenswrapper[4880]: I1201 03:15:40.332763 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e8e4-account-create-update-2vhq8"] Dec 01 03:15:40 crc kubenswrapper[4880]: W1201 03:15:40.339327 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaed214c1_1d55_4063_929f_2c4b1d88f025.slice/crio-7ff06e5b3d05b1b41a324d744530a68b4cc8b8bbf159fc1eb5ebb76c007407a0 WatchSource:0}: Error finding container 7ff06e5b3d05b1b41a324d744530a68b4cc8b8bbf159fc1eb5ebb76c007407a0: Status 404 returned error can't find the container with id 7ff06e5b3d05b1b41a324d744530a68b4cc8b8bbf159fc1eb5ebb76c007407a0 Dec 01 03:15:40 crc kubenswrapper[4880]: W1201 03:15:40.340129 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c5dd71_4081_4b8c_bb4c_c7e0087c7670.slice/crio-94ac393f3a791769d1e0c0b962bfd7a6d22d1459f95189fa31bc774add832eac WatchSource:0}: Error finding container 94ac393f3a791769d1e0c0b962bfd7a6d22d1459f95189fa31bc774add832eac: Status 404 returned error can't find the container with id 94ac393f3a791769d1e0c0b962bfd7a6d22d1459f95189fa31bc774add832eac Dec 01 03:15:41 crc kubenswrapper[4880]: E1201 03:15:41.708523 4880 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.726902 4880 generic.go:334] "Generic (PLEG): container finished" podID="a2489b51-6f7f-4f48-b614-870ab86df12a" containerID="a6232350a32f7a10e748739f1dd8650e405671775321d90c82d8153da4212cb4" exitCode=0 Dec 01 03:15:41 crc kubenswrapper[4880]: E1201 03:15:41.790077 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 01 03:15:41 crc kubenswrapper[4880]: E1201 03:15:41.808896 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 01 03:15:41 crc kubenswrapper[4880]: E1201 03:15:41.808944 4880 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-657545ccb7-km728" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerName="heat-engine" Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.837633 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-689598d56f-hm2sf" Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.858270 4880 generic.go:334] "Generic (PLEG): container finished" podID="4fa52299-6b4d-47d0-8250-4707b96770f9" containerID="d793974c95a6850e01bbee10ff053831fd566ede014f05f8924bcf61b97e6a21" exitCode=0 Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987070 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dpchb" event={"ID":"a2489b51-6f7f-4f48-b614-870ab86df12a","Type":"ContainerDied","Data":"a6232350a32f7a10e748739f1dd8650e405671775321d90c82d8153da4212cb4"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987105 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2946-account-create-update-dq78p" event={"ID":"aed214c1-1d55-4063-929f-2c4b1d88f025","Type":"ContainerStarted","Data":"7ff06e5b3d05b1b41a324d744530a68b4cc8b8bbf159fc1eb5ebb76c007407a0"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987119 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" event={"ID":"38c5dd71-4081-4b8c-bb4c-c7e0087c7670","Type":"ContainerStarted","Data":"94ac393f3a791769d1e0c0b962bfd7a6d22d1459f95189fa31bc774add832eac"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987128 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-689598d56f-hm2sf" event={"ID":"2b6fa45f-b959-4fae-958d-06f32307b7d7","Type":"ContainerDied","Data":"461e5b8e620f86da9114f89eb103e9328e420fdc360f6bb14d84987a42c483f0"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987146 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6403-account-create-update-89c7r" event={"ID":"4fa52299-6b4d-47d0-8250-4707b96770f9","Type":"ContainerDied","Data":"d793974c95a6850e01bbee10ff053831fd566ede014f05f8924bcf61b97e6a21"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987163 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-f9df6f645-phc82" event={"ID":"84f52e98-a6f5-4212-bff7-6980ee04ddaa","Type":"ContainerDied","Data":"870a86d3b226564f966187e6aa54963e1fe846c9a68245d45809442dbb714ea3"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987174 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870a86d3b226564f966187e6aa54963e1fe846c9a68245d45809442dbb714ea3" Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.987194 4880 scope.go:117] "RemoveContainer" containerID="bfa05f8e7fe4ec009b8fda2cf15e0cd21c8165f708dfa867be0bfe9dc411f4de" Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.989194 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfwjq" event={"ID":"942e9ee3-6a18-426c-ad47-fe3ba8ae4213","Type":"ContainerStarted","Data":"7e15f927cfcac2b60574fb7db3763113f41cd33991e0e8229ad99cf186a04662"} Dec 01 03:15:41 crc kubenswrapper[4880]: I1201 03:15:41.989229 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfwjq" event={"ID":"942e9ee3-6a18-426c-ad47-fe3ba8ae4213","Type":"ContainerStarted","Data":"be51a3b1010ba4e6848a6206c4bec6f0d59e506946ebe3d68a1178f5c4b72282"} Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.014312 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.031133 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xxfdq" event={"ID":"0bef3f01-6342-40a7-9213-9358a20b7efe","Type":"ContainerStarted","Data":"1da8d8ab16010feb0c8d380da920b5076cba417a620ba24b4707433ebf27bcc5"} Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.031177 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xxfdq" event={"ID":"0bef3f01-6342-40a7-9213-9358a20b7efe","Type":"ContainerStarted","Data":"cdec7a982c701c587302cc4e45da434fb1836efa11f801e2cc724531b8fb0426"} Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.068798 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data-custom\") pod \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.068910 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-combined-ca-bundle\") pod \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.068937 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnpwd\" (UniqueName: \"kubernetes.io/projected/84f52e98-a6f5-4212-bff7-6980ee04ddaa-kube-api-access-vnpwd\") pod \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.068956 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data\") pod \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\" (UID: \"84f52e98-a6f5-4212-bff7-6980ee04ddaa\") " Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.109075 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "84f52e98-a6f5-4212-bff7-6980ee04ddaa" (UID: "84f52e98-a6f5-4212-bff7-6980ee04ddaa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.118765 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f52e98-a6f5-4212-bff7-6980ee04ddaa-kube-api-access-vnpwd" (OuterVolumeSpecName: "kube-api-access-vnpwd") pod "84f52e98-a6f5-4212-bff7-6980ee04ddaa" (UID: "84f52e98-a6f5-4212-bff7-6980ee04ddaa"). InnerVolumeSpecName "kube-api-access-vnpwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.159101 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84f52e98-a6f5-4212-bff7-6980ee04ddaa" (UID: "84f52e98-a6f5-4212-bff7-6980ee04ddaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.180845 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.180888 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnpwd\" (UniqueName: \"kubernetes.io/projected/84f52e98-a6f5-4212-bff7-6980ee04ddaa-kube-api-access-vnpwd\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.180899 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.181069 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-xxfdq" podStartSLOduration=5.181050152 podStartE2EDuration="5.181050152s" podCreationTimestamp="2025-12-01 03:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:15:42.091590492 +0000 UTC m=+1171.602844864" watchObservedRunningTime="2025-12-01 03:15:42.181050152 +0000 UTC m=+1171.692304524" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.270293 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data" (OuterVolumeSpecName: "config-data") pod "84f52e98-a6f5-4212-bff7-6980ee04ddaa" (UID: "84f52e98-a6f5-4212-bff7-6980ee04ddaa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:42 crc kubenswrapper[4880]: I1201 03:15:42.282052 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f52e98-a6f5-4212-bff7-6980ee04ddaa-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.041080 4880 generic.go:334] "Generic (PLEG): container finished" podID="0bef3f01-6342-40a7-9213-9358a20b7efe" containerID="1da8d8ab16010feb0c8d380da920b5076cba417a620ba24b4707433ebf27bcc5" exitCode=0 Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.041373 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xxfdq" event={"ID":"0bef3f01-6342-40a7-9213-9358a20b7efe","Type":"ContainerDied","Data":"1da8d8ab16010feb0c8d380da920b5076cba417a620ba24b4707433ebf27bcc5"} Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.043819 4880 generic.go:334] "Generic (PLEG): container finished" podID="aed214c1-1d55-4063-929f-2c4b1d88f025" containerID="baa7c517890d2c84316a0a440ac34017d06df3834f1072adb37fbd9055bddeec" exitCode=0 Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.043900 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2946-account-create-update-dq78p" event={"ID":"aed214c1-1d55-4063-929f-2c4b1d88f025","Type":"ContainerDied","Data":"baa7c517890d2c84316a0a440ac34017d06df3834f1072adb37fbd9055bddeec"} Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.046183 4880 generic.go:334] "Generic (PLEG): container finished" podID="38c5dd71-4081-4b8c-bb4c-c7e0087c7670" containerID="09b809bfc0a7a2581058550cf215e4a962ab6fd51fd44ce8fabc99e946daaab3" exitCode=0 Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.046283 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" 
event={"ID":"38c5dd71-4081-4b8c-bb4c-c7e0087c7670","Type":"ContainerDied","Data":"09b809bfc0a7a2581058550cf215e4a962ab6fd51fd44ce8fabc99e946daaab3"} Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.048654 4880 generic.go:334] "Generic (PLEG): container finished" podID="942e9ee3-6a18-426c-ad47-fe3ba8ae4213" containerID="7e15f927cfcac2b60574fb7db3763113f41cd33991e0e8229ad99cf186a04662" exitCode=0 Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.048910 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfwjq" event={"ID":"942e9ee3-6a18-426c-ad47-fe3ba8ae4213","Type":"ContainerDied","Data":"7e15f927cfcac2b60574fb7db3763113f41cd33991e0e8229ad99cf186a04662"} Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.049036 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-f9df6f645-phc82" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.147997 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-f9df6f645-phc82"] Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.164048 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-f9df6f645-phc82"] Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.327102 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="e0a17482-31bd-4eb3-bbf0-db0a24905c39" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.180:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.518692 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.636515 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92kgt\" (UniqueName: \"kubernetes.io/projected/a2489b51-6f7f-4f48-b614-870ab86df12a-kube-api-access-92kgt\") pod \"a2489b51-6f7f-4f48-b614-870ab86df12a\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.636644 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2489b51-6f7f-4f48-b614-870ab86df12a-operator-scripts\") pod \"a2489b51-6f7f-4f48-b614-870ab86df12a\" (UID: \"a2489b51-6f7f-4f48-b614-870ab86df12a\") " Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.637968 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2489b51-6f7f-4f48-b614-870ab86df12a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2489b51-6f7f-4f48-b614-870ab86df12a" (UID: "a2489b51-6f7f-4f48-b614-870ab86df12a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.677039 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2489b51-6f7f-4f48-b614-870ab86df12a-kube-api-access-92kgt" (OuterVolumeSpecName: "kube-api-access-92kgt") pod "a2489b51-6f7f-4f48-b614-870ab86df12a" (UID: "a2489b51-6f7f-4f48-b614-870ab86df12a"). InnerVolumeSpecName "kube-api-access-92kgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.739862 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2489b51-6f7f-4f48-b614-870ab86df12a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.739911 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92kgt\" (UniqueName: \"kubernetes.io/projected/a2489b51-6f7f-4f48-b614-870ab86df12a-kube-api-access-92kgt\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.858075 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.862519 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.948418 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlz9z\" (UniqueName: \"kubernetes.io/projected/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-kube-api-access-tlz9z\") pod \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.948458 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-operator-scripts\") pod \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\" (UID: \"942e9ee3-6a18-426c-ad47-fe3ba8ae4213\") " Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.948588 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa52299-6b4d-47d0-8250-4707b96770f9-operator-scripts\") pod 
\"4fa52299-6b4d-47d0-8250-4707b96770f9\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.948631 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9z9m\" (UniqueName: \"kubernetes.io/projected/4fa52299-6b4d-47d0-8250-4707b96770f9-kube-api-access-n9z9m\") pod \"4fa52299-6b4d-47d0-8250-4707b96770f9\" (UID: \"4fa52299-6b4d-47d0-8250-4707b96770f9\") " Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.950262 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "942e9ee3-6a18-426c-ad47-fe3ba8ae4213" (UID: "942e9ee3-6a18-426c-ad47-fe3ba8ae4213"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.950521 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa52299-6b4d-47d0-8250-4707b96770f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fa52299-6b4d-47d0-8250-4707b96770f9" (UID: "4fa52299-6b4d-47d0-8250-4707b96770f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.957045 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-kube-api-access-tlz9z" (OuterVolumeSpecName: "kube-api-access-tlz9z") pod "942e9ee3-6a18-426c-ad47-fe3ba8ae4213" (UID: "942e9ee3-6a18-426c-ad47-fe3ba8ae4213"). InnerVolumeSpecName "kube-api-access-tlz9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:43 crc kubenswrapper[4880]: I1201 03:15:43.961017 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa52299-6b4d-47d0-8250-4707b96770f9-kube-api-access-n9z9m" (OuterVolumeSpecName: "kube-api-access-n9z9m") pod "4fa52299-6b4d-47d0-8250-4707b96770f9" (UID: "4fa52299-6b4d-47d0-8250-4707b96770f9"). InnerVolumeSpecName "kube-api-access-n9z9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.050095 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlz9z\" (UniqueName: \"kubernetes.io/projected/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-kube-api-access-tlz9z\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.050124 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/942e9ee3-6a18-426c-ad47-fe3ba8ae4213-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.050133 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa52299-6b4d-47d0-8250-4707b96770f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.050145 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9z9m\" (UniqueName: \"kubernetes.io/projected/4fa52299-6b4d-47d0-8250-4707b96770f9-kube-api-access-n9z9m\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.058114 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dpchb" event={"ID":"a2489b51-6f7f-4f48-b614-870ab86df12a","Type":"ContainerDied","Data":"5c04206b77fa85519c3eef98930d3f91456f5ca03afc11974cfd06a140dc6fbf"} Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.058151 
4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c04206b77fa85519c3eef98930d3f91456f5ca03afc11974cfd06a140dc6fbf" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.058195 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dpchb" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.064712 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6403-account-create-update-89c7r" event={"ID":"4fa52299-6b4d-47d0-8250-4707b96770f9","Type":"ContainerDied","Data":"2d90c4b901f9ea929cccf8863e560a3989d6b5a1e69bcf8b99aaaa02a63cdee6"} Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.064749 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d90c4b901f9ea929cccf8863e560a3989d6b5a1e69bcf8b99aaaa02a63cdee6" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.064797 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6403-account-create-update-89c7r" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.069465 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sfwjq" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.070006 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfwjq" event={"ID":"942e9ee3-6a18-426c-ad47-fe3ba8ae4213","Type":"ContainerDied","Data":"be51a3b1010ba4e6848a6206c4bec6f0d59e506946ebe3d68a1178f5c4b72282"} Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.070078 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be51a3b1010ba4e6848a6206c4bec6f0d59e506946ebe3d68a1178f5c4b72282" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.316475 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e0a17482-31bd-4eb3-bbf0-db0a24905c39" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.180:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.669259 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.769768 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.770673 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bef3f01-6342-40a7-9213-9358a20b7efe-operator-scripts\") pod \"0bef3f01-6342-40a7-9213-9358a20b7efe\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.770832 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njlgz\" (UniqueName: \"kubernetes.io/projected/0bef3f01-6342-40a7-9213-9358a20b7efe-kube-api-access-njlgz\") pod \"0bef3f01-6342-40a7-9213-9358a20b7efe\" (UID: \"0bef3f01-6342-40a7-9213-9358a20b7efe\") " Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.772101 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bef3f01-6342-40a7-9213-9358a20b7efe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bef3f01-6342-40a7-9213-9358a20b7efe" (UID: "0bef3f01-6342-40a7-9213-9358a20b7efe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.783482 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bef3f01-6342-40a7-9213-9358a20b7efe-kube-api-access-njlgz" (OuterVolumeSpecName: "kube-api-access-njlgz") pod "0bef3f01-6342-40a7-9213-9358a20b7efe" (UID: "0bef3f01-6342-40a7-9213-9358a20b7efe"). InnerVolumeSpecName "kube-api-access-njlgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.788833 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.812755 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" path="/var/lib/kubelet/pods/84f52e98-a6f5-4212-bff7-6980ee04ddaa/volumes" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.908491 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qf2c\" (UniqueName: \"kubernetes.io/projected/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-kube-api-access-6qf2c\") pod \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.908607 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-operator-scripts\") pod \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\" (UID: \"38c5dd71-4081-4b8c-bb4c-c7e0087c7670\") " Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.908710 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed214c1-1d55-4063-929f-2c4b1d88f025-operator-scripts\") pod \"aed214c1-1d55-4063-929f-2c4b1d88f025\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.908740 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xjb\" (UniqueName: \"kubernetes.io/projected/aed214c1-1d55-4063-929f-2c4b1d88f025-kube-api-access-n9xjb\") pod \"aed214c1-1d55-4063-929f-2c4b1d88f025\" (UID: \"aed214c1-1d55-4063-929f-2c4b1d88f025\") " Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.909382 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0bef3f01-6342-40a7-9213-9358a20b7efe-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.909398 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njlgz\" (UniqueName: \"kubernetes.io/projected/0bef3f01-6342-40a7-9213-9358a20b7efe-kube-api-access-njlgz\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.918350 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38c5dd71-4081-4b8c-bb4c-c7e0087c7670" (UID: "38c5dd71-4081-4b8c-bb4c-c7e0087c7670"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.919130 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed214c1-1d55-4063-929f-2c4b1d88f025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aed214c1-1d55-4063-929f-2c4b1d88f025" (UID: "aed214c1-1d55-4063-929f-2c4b1d88f025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.972477 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.975410 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-kube-api-access-6qf2c" (OuterVolumeSpecName: "kube-api-access-6qf2c") pod "38c5dd71-4081-4b8c-bb4c-c7e0087c7670" (UID: "38c5dd71-4081-4b8c-bb4c-c7e0087c7670"). InnerVolumeSpecName "kube-api-access-6qf2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:44 crc kubenswrapper[4880]: I1201 03:15:44.980253 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed214c1-1d55-4063-929f-2c4b1d88f025-kube-api-access-n9xjb" (OuterVolumeSpecName: "kube-api-access-n9xjb") pod "aed214c1-1d55-4063-929f-2c4b1d88f025" (UID: "aed214c1-1d55-4063-929f-2c4b1d88f025"). InnerVolumeSpecName "kube-api-access-n9xjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.014859 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-run-httpd\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.014996 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-sg-core-conf-yaml\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.015255 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.015270 4880 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aed214c1-1d55-4063-929f-2c4b1d88f025-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.015281 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9xjb\" (UniqueName: 
\"kubernetes.io/projected/aed214c1-1d55-4063-929f-2c4b1d88f025-kube-api-access-n9xjb\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.015292 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qf2c\" (UniqueName: \"kubernetes.io/projected/38c5dd71-4081-4b8c-bb4c-c7e0087c7670-kube-api-access-6qf2c\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.015781 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.063008 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.106758 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xxfdq" event={"ID":"0bef3f01-6342-40a7-9213-9358a20b7efe","Type":"ContainerDied","Data":"cdec7a982c701c587302cc4e45da434fb1836efa11f801e2cc724531b8fb0426"} Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.106962 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdec7a982c701c587302cc4e45da434fb1836efa11f801e2cc724531b8fb0426" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.107023 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xxfdq" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.111629 4880 generic.go:334] "Generic (PLEG): container finished" podID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerID="e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6" exitCode=0 Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.111719 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerDied","Data":"e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6"} Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.111771 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b493aa6-cf3f-497d-974e-d0e06f99c41b","Type":"ContainerDied","Data":"bbea0488ae6764725c12c0543a00f3ca0f0f36c3e2a225862e69aace2c068924"} Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.111788 4880 scope.go:117] "RemoveContainer" containerID="c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.112003 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.115785 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-combined-ca-bundle\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.115898 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-config-data\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.116019 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-scripts\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.116127 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-log-httpd\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.116202 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j85xt\" (UniqueName: \"kubernetes.io/projected/3b493aa6-cf3f-497d-974e-d0e06f99c41b-kube-api-access-j85xt\") pod \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\" (UID: \"3b493aa6-cf3f-497d-974e-d0e06f99c41b\") " Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.116703 4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.117002 4880 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.118640 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2946-account-create-update-dq78p" event={"ID":"aed214c1-1d55-4063-929f-2c4b1d88f025","Type":"ContainerDied","Data":"7ff06e5b3d05b1b41a324d744530a68b4cc8b8bbf159fc1eb5ebb76c007407a0"} Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.118675 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff06e5b3d05b1b41a324d744530a68b4cc8b8bbf159fc1eb5ebb76c007407a0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.118747 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2946-account-create-update-dq78p" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.120646 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" event={"ID":"38c5dd71-4081-4b8c-bb4c-c7e0087c7670","Type":"ContainerDied","Data":"94ac393f3a791769d1e0c0b962bfd7a6d22d1459f95189fa31bc774add832eac"} Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.120673 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94ac393f3a791769d1e0c0b962bfd7a6d22d1459f95189fa31bc774add832eac" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.120735 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e8e4-account-create-update-2vhq8" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.125860 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b493aa6-cf3f-497d-974e-d0e06f99c41b-kube-api-access-j85xt" (OuterVolumeSpecName: "kube-api-access-j85xt") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "kube-api-access-j85xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.126234 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.141151 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-scripts" (OuterVolumeSpecName: "scripts") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.207940 4880 scope.go:117] "RemoveContainer" containerID="eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.219352 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.223122 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b493aa6-cf3f-497d-974e-d0e06f99c41b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.223192 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j85xt\" (UniqueName: \"kubernetes.io/projected/3b493aa6-cf3f-497d-974e-d0e06f99c41b-kube-api-access-j85xt\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.260065 4880 scope.go:117] "RemoveContainer" containerID="7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.301860 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.316031 4880 scope.go:117] "RemoveContainer" containerID="e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.326476 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.326973 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-config-data" (OuterVolumeSpecName: "config-data") pod "3b493aa6-cf3f-497d-974e-d0e06f99c41b" (UID: "3b493aa6-cf3f-497d-974e-d0e06f99c41b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.354343 4880 scope.go:117] "RemoveContainer" containerID="c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.358516 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58\": container with ID starting with c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58 not found: ID does not exist" containerID="c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.358560 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58"} err="failed to get container status \"c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58\": rpc error: code = NotFound desc = could not find container 
\"c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58\": container with ID starting with c3d27a8e849339c961e99d906f86a9ea3e46929e17b43d29bc8d0e644d852d58 not found: ID does not exist" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.358587 4880 scope.go:117] "RemoveContainer" containerID="eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.358865 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8\": container with ID starting with eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8 not found: ID does not exist" containerID="eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.358900 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8"} err="failed to get container status \"eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8\": rpc error: code = NotFound desc = could not find container \"eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8\": container with ID starting with eb425ed4bfff21c5e8be4e811bd0192594805373958403b75cae7ee6261596d8 not found: ID does not exist" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.358915 4880 scope.go:117] "RemoveContainer" containerID="7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.359109 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6\": container with ID starting with 7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6 not found: ID does not exist" 
containerID="7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.359124 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6"} err="failed to get container status \"7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6\": rpc error: code = NotFound desc = could not find container \"7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6\": container with ID starting with 7adeb8fdec3dedd0be9c073b98a1ca0f8bb949109b355b2d0da5ea530447a7d6 not found: ID does not exist" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.359135 4880 scope.go:117] "RemoveContainer" containerID="e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.359743 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6\": container with ID starting with e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6 not found: ID does not exist" containerID="e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.359780 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6"} err="failed to get container status \"e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6\": rpc error: code = NotFound desc = could not find container \"e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6\": container with ID starting with e84b7d619e057bfb2bf890f2a865462c2d2d78a21c6d8db8b96a3aab4c9449a6 not found: ID does not exist" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.427985 4880 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b493aa6-cf3f-497d-974e-d0e06f99c41b-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.450817 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.461313 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.475947 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.476501 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerName="heat-cfnapi" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.476563 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerName="heat-cfnapi" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.476621 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerName="heat-cfnapi" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.477042 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerName="heat-cfnapi" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.477506 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942e9ee3-6a18-426c-ad47-fe3ba8ae4213" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.477677 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="942e9ee3-6a18-426c-ad47-fe3ba8ae4213" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.477761 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" 
containerName="ceilometer-notification-agent" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.477820 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-notification-agent" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.477907 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa52299-6b4d-47d0-8250-4707b96770f9" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.477971 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa52299-6b4d-47d0-8250-4707b96770f9" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.478033 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2489b51-6f7f-4f48-b614-870ab86df12a" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.478135 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2489b51-6f7f-4f48-b614-870ab86df12a" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.478213 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c5dd71-4081-4b8c-bb4c-c7e0087c7670" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.478278 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c5dd71-4081-4b8c-bb4c-c7e0087c7670" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.478362 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerName="heat-api" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.478461 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerName="heat-api" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.478564 4880 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="sg-core" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.478651 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="sg-core" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.478749 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed214c1-1d55-4063-929f-2c4b1d88f025" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.478897 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed214c1-1d55-4063-929f-2c4b1d88f025" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.478993 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="proxy-httpd" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.481120 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="proxy-httpd" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.481281 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerName="heat-api" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.482405 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerName="heat-api" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.482547 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-central-agent" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.482631 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-central-agent" Dec 01 03:15:45 crc kubenswrapper[4880]: E1201 03:15:45.482701 4880 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0bef3f01-6342-40a7-9213-9358a20b7efe" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.482784 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bef3f01-6342-40a7-9213-9358a20b7efe" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483274 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-notification-agent" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483358 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerName="heat-api" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483433 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bef3f01-6342-40a7-9213-9358a20b7efe" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483501 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="proxy-httpd" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483563 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerName="heat-cfnapi" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483630 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2489b51-6f7f-4f48-b614-870ab86df12a" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483693 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed214c1-1d55-4063-929f-2c4b1d88f025" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483756 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c5dd71-4081-4b8c-bb4c-c7e0087c7670" containerName="mariadb-account-create-update" Dec 01 
03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483818 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="sg-core" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483891 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f52e98-a6f5-4212-bff7-6980ee04ddaa" containerName="heat-cfnapi" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.483948 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" containerName="ceilometer-central-agent" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.484006 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="942e9ee3-6a18-426c-ad47-fe3ba8ae4213" containerName="mariadb-database-create" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.484063 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa52299-6b4d-47d0-8250-4707b96770f9" containerName="mariadb-account-create-update" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.484435 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" containerName="heat-api" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.486750 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.489215 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.489388 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.491095 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633595 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-scripts\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633649 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-log-httpd\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633670 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-run-httpd\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633689 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflk2\" (UniqueName: \"kubernetes.io/projected/042ca337-471d-4796-9cc7-2561bd51219e-kube-api-access-wflk2\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " 
pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633784 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-config-data\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633807 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.633856 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.734895 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.734992 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-scripts\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.735023 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-log-httpd\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.735042 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-run-httpd\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.735059 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflk2\" (UniqueName: \"kubernetes.io/projected/042ca337-471d-4796-9cc7-2561bd51219e-kube-api-access-wflk2\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.735114 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-config-data\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.735143 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.735891 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-run-httpd\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc 
kubenswrapper[4880]: I1201 03:15:45.735959 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-log-httpd\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.738729 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-scripts\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.738788 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.741028 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.741268 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-config-data\") pod \"ceilometer-0\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.761530 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflk2\" (UniqueName: \"kubernetes.io/projected/042ca337-471d-4796-9cc7-2561bd51219e-kube-api-access-wflk2\") pod \"ceilometer-0\" (UID: 
\"042ca337-471d-4796-9cc7-2561bd51219e\") " pod="openstack/ceilometer-0" Dec 01 03:15:45 crc kubenswrapper[4880]: I1201 03:15:45.806063 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:15:46 crc kubenswrapper[4880]: I1201 03:15:46.405094 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:46 crc kubenswrapper[4880]: I1201 03:15:46.792774 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b493aa6-cf3f-497d-974e-d0e06f99c41b" path="/var/lib/kubelet/pods/3b493aa6-cf3f-497d-974e-d0e06f99c41b/volumes" Dec 01 03:15:47 crc kubenswrapper[4880]: I1201 03:15:47.151256 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerStarted","Data":"ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791"} Dec 01 03:15:47 crc kubenswrapper[4880]: I1201 03:15:47.151313 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerStarted","Data":"bb65bc40fac0877d74f8e2ef35290b305557bd9db587b024b3eac434b0993dc1"} Dec 01 03:15:47 crc kubenswrapper[4880]: I1201 03:15:47.928274 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.209546 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerStarted","Data":"f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b"} Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.483069 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nj9mn"] Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.484202 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.487086 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.487120 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mcw5c" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.487238 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.504697 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nj9mn"] Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.590699 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.590769 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-scripts\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.590857 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-config-data\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " 
pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.590937 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhjj\" (UniqueName: \"kubernetes.io/projected/b2e7d851-5607-477c-a499-bee4568e24c2-kube-api-access-slhjj\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.692707 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-config-data\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.693003 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhjj\" (UniqueName: \"kubernetes.io/projected/b2e7d851-5607-477c-a499-bee4568e24c2-kube-api-access-slhjj\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.693110 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.693203 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-scripts\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: 
\"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.702500 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.705488 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-config-data\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.714329 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-scripts\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.720167 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhjj\" (UniqueName: \"kubernetes.io/projected/b2e7d851-5607-477c-a499-bee4568e24c2-kube-api-access-slhjj\") pod \"nova-cell0-conductor-db-sync-nj9mn\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") " pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.722070 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:15:48 crc kubenswrapper[4880]: I1201 03:15:48.801521 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" Dec 01 03:15:49 crc kubenswrapper[4880]: I1201 03:15:49.218215 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerStarted","Data":"3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42"} Dec 01 03:15:49 crc kubenswrapper[4880]: W1201 03:15:49.310162 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e7d851_5607_477c_a499_bee4568e24c2.slice/crio-85b855207a68c04bcab0ec15f928071ad0497af7285a551a3ebc54793ebbb34a WatchSource:0}: Error finding container 85b855207a68c04bcab0ec15f928071ad0497af7285a551a3ebc54793ebbb34a: Status 404 returned error can't find the container with id 85b855207a68c04bcab0ec15f928071ad0497af7285a551a3ebc54793ebbb34a Dec 01 03:15:49 crc kubenswrapper[4880]: I1201 03:15:49.312819 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nj9mn"] Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.230310 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerStarted","Data":"55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3"} Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.230696 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-central-agent" containerID="cri-o://ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791" gracePeriod=30 Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.230808 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.230832 4880 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="proxy-httpd" containerID="cri-o://55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3" gracePeriod=30 Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.230865 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="sg-core" containerID="cri-o://3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42" gracePeriod=30 Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.230917 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-notification-agent" containerID="cri-o://f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b" gracePeriod=30 Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.241494 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" event={"ID":"b2e7d851-5607-477c-a499-bee4568e24c2","Type":"ContainerStarted","Data":"85b855207a68c04bcab0ec15f928071ad0497af7285a551a3ebc54793ebbb34a"} Dec 01 03:15:50 crc kubenswrapper[4880]: I1201 03:15:50.261618 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.980866594 podStartE2EDuration="5.261599893s" podCreationTimestamp="2025-12-01 03:15:45 +0000 UTC" firstStartedPulling="2025-12-01 03:15:46.417801497 +0000 UTC m=+1175.929055869" lastFinishedPulling="2025-12-01 03:15:49.698534796 +0000 UTC m=+1179.209789168" observedRunningTime="2025-12-01 03:15:50.251659602 +0000 UTC m=+1179.762913974" watchObservedRunningTime="2025-12-01 03:15:50.261599893 +0000 UTC m=+1179.772854265" Dec 01 03:15:50 crc kubenswrapper[4880]: E1201 03:15:50.801952 4880 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 01 03:15:50 crc kubenswrapper[4880]: E1201 03:15:50.816403 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 01 03:15:50 crc kubenswrapper[4880]: E1201 03:15:50.824719 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 01 03:15:50 crc kubenswrapper[4880]: E1201 03:15:50.824805 4880 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-657545ccb7-km728" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerName="heat-engine" Dec 01 03:15:50 crc kubenswrapper[4880]: E1201 03:15:50.904708 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod042ca337_471d_4796_9cc7_2561bd51219e.slice/crio-conmon-f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b.scope\": RecentStats: unable to find data in memory cache]" Dec 01 03:15:51 crc kubenswrapper[4880]: I1201 03:15:51.250770 4880 generic.go:334] "Generic (PLEG): container finished" 
podID="042ca337-471d-4796-9cc7-2561bd51219e" containerID="55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3" exitCode=0 Dec 01 03:15:51 crc kubenswrapper[4880]: I1201 03:15:51.250802 4880 generic.go:334] "Generic (PLEG): container finished" podID="042ca337-471d-4796-9cc7-2561bd51219e" containerID="3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42" exitCode=2 Dec 01 03:15:51 crc kubenswrapper[4880]: I1201 03:15:51.250843 4880 generic.go:334] "Generic (PLEG): container finished" podID="042ca337-471d-4796-9cc7-2561bd51219e" containerID="f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b" exitCode=0 Dec 01 03:15:51 crc kubenswrapper[4880]: I1201 03:15:51.250864 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerDied","Data":"55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3"} Dec 01 03:15:51 crc kubenswrapper[4880]: I1201 03:15:51.250902 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerDied","Data":"3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42"} Dec 01 03:15:51 crc kubenswrapper[4880]: I1201 03:15:51.250910 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerDied","Data":"f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b"} Dec 01 03:15:52 crc kubenswrapper[4880]: I1201 03:15:52.713324 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:15:52 crc kubenswrapper[4880]: I1201 03:15:52.715208 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-log" 
containerID="cri-o://ddce1a415abf9040e0ac23833658699ba84695c544be7d0c19978a697da91b13" gracePeriod=30 Dec 01 03:15:52 crc kubenswrapper[4880]: I1201 03:15:52.715276 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-httpd" containerID="cri-o://0404c3bbd5eeb73d60c7b1258d7e7601e35d3db5a09b65495772b561ddbf142e" gracePeriod=30 Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.173004 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.197462 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data-custom\") pod \"f3ed19fa-d784-48ed-8770-35c150a1a24e\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.197697 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-combined-ca-bundle\") pod \"f3ed19fa-d784-48ed-8770-35c150a1a24e\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.197899 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvf6m\" (UniqueName: \"kubernetes.io/projected/f3ed19fa-d784-48ed-8770-35c150a1a24e-kube-api-access-cvf6m\") pod \"f3ed19fa-d784-48ed-8770-35c150a1a24e\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.197921 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data\") pod 
\"f3ed19fa-d784-48ed-8770-35c150a1a24e\" (UID: \"f3ed19fa-d784-48ed-8770-35c150a1a24e\") " Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.241007 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ed19fa-d784-48ed-8770-35c150a1a24e-kube-api-access-cvf6m" (OuterVolumeSpecName: "kube-api-access-cvf6m") pod "f3ed19fa-d784-48ed-8770-35c150a1a24e" (UID: "f3ed19fa-d784-48ed-8770-35c150a1a24e"). InnerVolumeSpecName "kube-api-access-cvf6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.244017 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3ed19fa-d784-48ed-8770-35c150a1a24e" (UID: "f3ed19fa-d784-48ed-8770-35c150a1a24e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.273019 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3ed19fa-d784-48ed-8770-35c150a1a24e" (UID: "f3ed19fa-d784-48ed-8770-35c150a1a24e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.279260 4880 generic.go:334] "Generic (PLEG): container finished" podID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerID="ddce1a415abf9040e0ac23833658699ba84695c544be7d0c19978a697da91b13" exitCode=143 Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.279617 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de","Type":"ContainerDied","Data":"ddce1a415abf9040e0ac23833658699ba84695c544be7d0c19978a697da91b13"} Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.287157 4880 generic.go:334] "Generic (PLEG): container finished" podID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" exitCode=0 Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.287181 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-657545ccb7-km728" event={"ID":"f3ed19fa-d784-48ed-8770-35c150a1a24e","Type":"ContainerDied","Data":"a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359"} Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.287195 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-657545ccb7-km728" event={"ID":"f3ed19fa-d784-48ed-8770-35c150a1a24e","Type":"ContainerDied","Data":"fb9a35fde5b66eb3e5475553005e1360f925f05882505e14ac64144e32439f61"} Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.287211 4880 scope.go:117] "RemoveContainer" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.287321 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-657545ccb7-km728" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.300342 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvf6m\" (UniqueName: \"kubernetes.io/projected/f3ed19fa-d784-48ed-8770-35c150a1a24e-kube-api-access-cvf6m\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.300372 4880 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.300384 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.311586 4880 scope.go:117] "RemoveContainer" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" Dec 01 03:15:53 crc kubenswrapper[4880]: E1201 03:15:53.312658 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359\": container with ID starting with a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359 not found: ID does not exist" containerID="a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.312682 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359"} err="failed to get container status \"a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359\": rpc error: code = NotFound desc = could not find container 
\"a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359\": container with ID starting with a261f3c06c445907763c641acedf2376d0921dba17be4e432ce285640e583359 not found: ID does not exist" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.345933 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data" (OuterVolumeSpecName: "config-data") pod "f3ed19fa-d784-48ed-8770-35c150a1a24e" (UID: "f3ed19fa-d784-48ed-8770-35c150a1a24e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.402215 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ed19fa-d784-48ed-8770-35c150a1a24e-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.626124 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-657545ccb7-km728"] Dec 01 03:15:53 crc kubenswrapper[4880]: I1201 03:15:53.634660 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-657545ccb7-km728"] Dec 01 03:15:54 crc kubenswrapper[4880]: I1201 03:15:54.816661 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" path="/var/lib/kubelet/pods/f3ed19fa-d784-48ed-8770-35c150a1a24e/volumes" Dec 01 03:15:54 crc kubenswrapper[4880]: I1201 03:15:54.923616 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:15:54 crc kubenswrapper[4880]: I1201 03:15:54.923835 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-log" containerID="cri-o://eb51283a0e8691b394f87355ab890692736063b68e69d530ad40b7746aab5d28" gracePeriod=30 Dec 01 
03:15:54 crc kubenswrapper[4880]: I1201 03:15:54.925727 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-httpd" containerID="cri-o://315f976ef09fb985020ef772d3f5bbfb7037c0dc122146af2ac3486e60af9d21" gracePeriod=30 Dec 01 03:15:55 crc kubenswrapper[4880]: I1201 03:15:55.322740 4880 generic.go:334] "Generic (PLEG): container finished" podID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerID="eb51283a0e8691b394f87355ab890692736063b68e69d530ad40b7746aab5d28" exitCode=143 Dec 01 03:15:55 crc kubenswrapper[4880]: I1201 03:15:55.322825 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39","Type":"ContainerDied","Data":"eb51283a0e8691b394f87355ab890692736063b68e69d530ad40b7746aab5d28"} Dec 01 03:15:56 crc kubenswrapper[4880]: I1201 03:15:56.333434 4880 generic.go:334] "Generic (PLEG): container finished" podID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerID="0404c3bbd5eeb73d60c7b1258d7e7601e35d3db5a09b65495772b561ddbf142e" exitCode=0 Dec 01 03:15:56 crc kubenswrapper[4880]: I1201 03:15:56.333474 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de","Type":"ContainerDied","Data":"0404c3bbd5eeb73d60c7b1258d7e7601e35d3db5a09b65495772b561ddbf142e"} Dec 01 03:15:58 crc kubenswrapper[4880]: I1201 03:15:58.351345 4880 generic.go:334] "Generic (PLEG): container finished" podID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerID="315f976ef09fb985020ef772d3f5bbfb7037c0dc122146af2ac3486e60af9d21" exitCode=0 Dec 01 03:15:58 crc kubenswrapper[4880]: I1201 03:15:58.351418 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39","Type":"ContainerDied","Data":"315f976ef09fb985020ef772d3f5bbfb7037c0dc122146af2ac3486e60af9d21"} Dec 01 03:15:59 crc kubenswrapper[4880]: I1201 03:15:59.367957 4880 generic.go:334] "Generic (PLEG): container finished" podID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerID="35441ddb895b7c9641ccfb5abb51fc60e28765d8c6dc4e3ba059e57ec43f3d30" exitCode=137 Dec 01 03:15:59 crc kubenswrapper[4880]: I1201 03:15:59.368207 4880 generic.go:334] "Generic (PLEG): container finished" podID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerID="a21857edb278cf8f3e444c37932515dbca958eaea72385984220acbeafa3688d" exitCode=137 Dec 01 03:15:59 crc kubenswrapper[4880]: I1201 03:15:59.368171 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerDied","Data":"35441ddb895b7c9641ccfb5abb51fc60e28765d8c6dc4e3ba059e57ec43f3d30"} Dec 01 03:15:59 crc kubenswrapper[4880]: I1201 03:15:59.368249 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerDied","Data":"a21857edb278cf8f3e444c37932515dbca958eaea72385984220acbeafa3688d"} Dec 01 03:15:59 crc kubenswrapper[4880]: I1201 03:15:59.368287 4880 scope.go:117] "RemoveContainer" containerID="ca4abb4a90b26185324b9145545abeafcf27374b78455ab6064a09cf34a460ca" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.196279 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.321858 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329348 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-config-data\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329393 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xtxq\" (UniqueName: \"kubernetes.io/projected/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-kube-api-access-5xtxq\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329572 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-scripts\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329617 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-tls-certs\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329654 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-secret-key\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329677 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-combined-ca-bundle\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.329715 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-logs\") pod \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\" (UID: \"182db9c6-4756-4acb-a228-a1fe3fe7a4dd\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.331016 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-logs" (OuterVolumeSpecName: "logs") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.337866 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-kube-api-access-5xtxq" (OuterVolumeSpecName: "kube-api-access-5xtxq") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "kube-api-access-5xtxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.339053 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.388619 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" event={"ID":"b2e7d851-5607-477c-a499-bee4568e24c2","Type":"ContainerStarted","Data":"4dc1f945c882a4d7c06c75fa753538910c0ddfaf0e27da2fa29be03cd5691e2b"} Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.392624 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-config-data" (OuterVolumeSpecName: "config-data") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.392888 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ddc7fc844-5qd9h" event={"ID":"182db9c6-4756-4acb-a228-a1fe3fe7a4dd","Type":"ContainerDied","Data":"92054959502be3c0ce330e1a0475d8371e7ca9a6b7e4dd77ecf8476f920cd048"} Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.392921 4880 scope.go:117] "RemoveContainer" containerID="35441ddb895b7c9641ccfb5abb51fc60e28765d8c6dc4e3ba059e57ec43f3d30" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.392993 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ddc7fc844-5qd9h" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.395068 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-scripts" (OuterVolumeSpecName: "scripts") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.397824 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39","Type":"ContainerDied","Data":"6dc94bd57d6322bebd89779f140ddfaaf30910dcc3ed0cfe6d660927f793dda0"} Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.397907 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.402536 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430022 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "182db9c6-4756-4acb-a228-a1fe3fe7a4dd" (UID: "182db9c6-4756-4acb-a228-a1fe3fe7a4dd"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430576 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-httpd-run\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430656 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-config-data\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430708 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-internal-tls-certs\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430740 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430758 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-combined-ca-bundle\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430833 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-logs\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430915 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvrnk\" (UniqueName: \"kubernetes.io/projected/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-kube-api-access-xvrnk\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.430946 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-scripts\") pod \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\" (UID: \"a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39\") " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431280 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431361 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431374 4880 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431384 4880 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431393 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431401 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431408 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.431418 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xtxq\" (UniqueName: \"kubernetes.io/projected/182db9c6-4756-4acb-a228-a1fe3fe7a4dd-kube-api-access-5xtxq\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.434223 4880 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-logs" (OuterVolumeSpecName: "logs") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.435994 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.440833 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-kube-api-access-xvrnk" (OuterVolumeSpecName: "kube-api-access-xvrnk") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "kube-api-access-xvrnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.442049 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-scripts" (OuterVolumeSpecName: "scripts") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.475756 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.517462 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533572 4880 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533610 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533620 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533628 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533636 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvrnk\" (UniqueName: \"kubernetes.io/projected/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-kube-api-access-xvrnk\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533645 4880 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.533655 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.543597 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-config-data" (OuterVolumeSpecName: "config-data") pod "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" (UID: "a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.552168 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.611813 4880 scope.go:117] "RemoveContainer" containerID="a21857edb278cf8f3e444c37932515dbca958eaea72385984220acbeafa3688d" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.633075 4880 scope.go:117] "RemoveContainer" containerID="315f976ef09fb985020ef772d3f5bbfb7037c0dc122146af2ac3486e60af9d21" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.634771 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.634800 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.658787 4880 scope.go:117] 
"RemoveContainer" containerID="eb51283a0e8691b394f87355ab890692736063b68e69d530ad40b7746aab5d28" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.726173 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" podStartSLOduration=2.169147617 podStartE2EDuration="12.726156596s" podCreationTimestamp="2025-12-01 03:15:48 +0000 UTC" firstStartedPulling="2025-12-01 03:15:49.311306233 +0000 UTC m=+1178.822560605" lastFinishedPulling="2025-12-01 03:15:59.868315212 +0000 UTC m=+1189.379569584" observedRunningTime="2025-12-01 03:16:00.420840982 +0000 UTC m=+1189.932095354" watchObservedRunningTime="2025-12-01 03:16:00.726156596 +0000 UTC m=+1190.237410968" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.730182 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.738336 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.774900 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6ddc7fc844-5qd9h"] Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.799120 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" path="/var/lib/kubelet/pods/a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39/volumes" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.800129 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6ddc7fc844-5qd9h"] Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.820929 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:16:00 crc kubenswrapper[4880]: E1201 03:16:00.821352 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-httpd" Dec 
01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821363 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-httpd" Dec 01 03:16:00 crc kubenswrapper[4880]: E1201 03:16:00.821392 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon-log" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821398 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon-log" Dec 01 03:16:00 crc kubenswrapper[4880]: E1201 03:16:00.821411 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821417 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" Dec 01 03:16:00 crc kubenswrapper[4880]: E1201 03:16:00.821424 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerName="heat-engine" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821430 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerName="heat-engine" Dec 01 03:16:00 crc kubenswrapper[4880]: E1201 03:16:00.821443 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821449 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" Dec 01 03:16:00 crc kubenswrapper[4880]: E1201 03:16:00.821464 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-log" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821472 4880 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-log" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821636 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-log" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821652 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821661 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821677 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d0c9b0-2962-4e22-ae6d-3ffd2ee1ef39" containerName="glance-httpd" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821689 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" containerName="horizon-log" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.821699 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ed19fa-d784-48ed-8770-35c150a1a24e" containerName="heat-engine" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.822624 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.830450 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.830636 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.834574 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940657 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztzx\" (UniqueName: \"kubernetes.io/projected/bd74f03a-5371-4ae1-b044-c7cf3cacd388-kube-api-access-xztzx\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940700 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd74f03a-5371-4ae1-b044-c7cf3cacd388-logs\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940756 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940814 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940832 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940857 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd74f03a-5371-4ae1-b044-c7cf3cacd388-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940945 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:00 crc kubenswrapper[4880]: I1201 03:16:00.940976 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.043682 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.043760 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.043782 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.043811 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd74f03a-5371-4ae1-b044-c7cf3cacd388-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.043838 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.044938 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.047854 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.047933 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xztzx\" (UniqueName: \"kubernetes.io/projected/bd74f03a-5371-4ae1-b044-c7cf3cacd388-kube-api-access-xztzx\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.047961 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd74f03a-5371-4ae1-b044-c7cf3cacd388-logs\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.051164 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd74f03a-5371-4ae1-b044-c7cf3cacd388-logs\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.051566 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd74f03a-5371-4ae1-b044-c7cf3cacd388-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.054501 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.055551 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.074893 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xztzx\" (UniqueName: \"kubernetes.io/projected/bd74f03a-5371-4ae1-b044-c7cf3cacd388-kube-api-access-xztzx\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.115658 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.116282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd74f03a-5371-4ae1-b044-c7cf3cacd388-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " 
pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.124457 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd74f03a-5371-4ae1-b044-c7cf3cacd388\") " pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.166188 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.210412 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.258955 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ljj\" (UniqueName: \"kubernetes.io/projected/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-kube-api-access-94ljj\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.259048 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-combined-ca-bundle\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.259162 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-logs\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.259196 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-scripts\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.300567 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.300938 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-httpd-run\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.301048 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-public-tls-certs\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.301081 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-config-data\") pod \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\" (UID: \"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de\") " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.270183 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-logs" (OuterVolumeSpecName: "logs") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.314099 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-kube-api-access-94ljj" (OuterVolumeSpecName: "kube-api-access-94ljj") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "kube-api-access-94ljj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.324946 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.336688 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-scripts" (OuterVolumeSpecName: "scripts") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.341723 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.415746 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.415773 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.415799 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.415809 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.415821 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ljj\" (UniqueName: \"kubernetes.io/projected/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-kube-api-access-94ljj\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.419960 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de","Type":"ContainerDied","Data":"c80885deee2b1bbce85915877c215dd4fdbbe14938a4347cd26425aaeac5ee74"} Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.420252 4880 scope.go:117] "RemoveContainer" containerID="0404c3bbd5eeb73d60c7b1258d7e7601e35d3db5a09b65495772b561ddbf142e" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.420126 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-config-data" (OuterVolumeSpecName: "config-data") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.420071 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.439228 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.454797 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.455077 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" (UID: "b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.455368 4880 scope.go:117] "RemoveContainer" containerID="ddce1a415abf9040e0ac23833658699ba84695c544be7d0c19978a697da91b13" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.517882 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.517908 4880 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.517919 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.517928 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.786937 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.812513 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.845140 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:16:01 crc kubenswrapper[4880]: E1201 03:16:01.845564 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-httpd" Dec 01 03:16:01 crc 
kubenswrapper[4880]: I1201 03:16:01.845582 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-httpd" Dec 01 03:16:01 crc kubenswrapper[4880]: E1201 03:16:01.845613 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-log" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.845620 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-log" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.845793 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-log" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.845820 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" containerName="glance-httpd" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.846761 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.850203 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.850783 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.851079 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.867374 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.923519 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.923596 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c545ae8-67b7-4daf-94f0-279840948e9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.924485 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc 
kubenswrapper[4880]: I1201 03:16:01.924514 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.924571 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.924590 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wl6m\" (UniqueName: \"kubernetes.io/projected/0c545ae8-67b7-4daf-94f0-279840948e9d-kube-api-access-7wl6m\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.924614 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c545ae8-67b7-4daf-94f0-279840948e9d-logs\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:01 crc kubenswrapper[4880]: I1201 03:16:01.924696 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 
03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.025746 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026136 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026166 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c545ae8-67b7-4daf-94f0-279840948e9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026213 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026230 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026263 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026281 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wl6m\" (UniqueName: \"kubernetes.io/projected/0c545ae8-67b7-4daf-94f0-279840948e9d-kube-api-access-7wl6m\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026306 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c545ae8-67b7-4daf-94f0-279840948e9d-logs\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.026735 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c545ae8-67b7-4daf-94f0-279840948e9d-logs\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.031100 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0c545ae8-67b7-4daf-94f0-279840948e9d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.035012 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.037153 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-config-data\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.037615 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.038151 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.043560 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c545ae8-67b7-4daf-94f0-279840948e9d-scripts\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.050522 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wl6m\" (UniqueName: 
\"kubernetes.io/projected/0c545ae8-67b7-4daf-94f0-279840948e9d-kube-api-access-7wl6m\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.086272 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"0c545ae8-67b7-4daf-94f0-279840948e9d\") " pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.180229 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.442754 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd74f03a-5371-4ae1-b044-c7cf3cacd388","Type":"ContainerStarted","Data":"dc6d61941c4a591fba193961715ccb67b0fd9a76d56f76941a334f6604d6f03e"} Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.814375 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182db9c6-4756-4acb-a228-a1fe3fe7a4dd" path="/var/lib/kubelet/pods/182db9c6-4756-4acb-a228-a1fe3fe7a4dd/volumes" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.815085 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de" path="/var/lib/kubelet/pods/b581e543-c9b8-4c94-8a6d-8c2b6ba7b9de/volumes" Dec 01 03:16:02 crc kubenswrapper[4880]: I1201 03:16:02.821581 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 03:16:03 crc kubenswrapper[4880]: I1201 03:16:03.451357 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"0c545ae8-67b7-4daf-94f0-279840948e9d","Type":"ContainerStarted","Data":"3bf53790a4c7ea3d5e3fca6b6834c25ceeb0d712d2c1cdf82867de2e91045209"} Dec 01 03:16:03 crc kubenswrapper[4880]: I1201 03:16:03.453841 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd74f03a-5371-4ae1-b044-c7cf3cacd388","Type":"ContainerStarted","Data":"b30991fec0cb2a251d65fc5fa61af60f7266d92a937647dafade79f1a56b163b"} Dec 01 03:16:04 crc kubenswrapper[4880]: I1201 03:16:04.463418 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd74f03a-5371-4ae1-b044-c7cf3cacd388","Type":"ContainerStarted","Data":"69efb1a86a78a009185d61a3b3a79b75aed1e4ef29de46a4e47d339cba83143e"} Dec 01 03:16:04 crc kubenswrapper[4880]: I1201 03:16:04.466275 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c545ae8-67b7-4daf-94f0-279840948e9d","Type":"ContainerStarted","Data":"baacb4fc8601c315c1b2eab403f3631d7985f1060b7e92cd9eb2b76774665dd0"} Dec 01 03:16:04 crc kubenswrapper[4880]: I1201 03:16:04.466302 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0c545ae8-67b7-4daf-94f0-279840948e9d","Type":"ContainerStarted","Data":"c0bce89dc0d45b77b69f5d1443e4c83409fdfcdbaae48c3815ce94550647f11a"} Dec 01 03:16:04 crc kubenswrapper[4880]: I1201 03:16:04.525079 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.525055828 podStartE2EDuration="4.525055828s" podCreationTimestamp="2025-12-01 03:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:04.508737265 +0000 UTC m=+1194.019991637" watchObservedRunningTime="2025-12-01 03:16:04.525055828 +0000 UTC m=+1194.036310200" Dec 01 
03:16:04 crc kubenswrapper[4880]: I1201 03:16:04.539412 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.53939574 podStartE2EDuration="3.53939574s" podCreationTimestamp="2025-12-01 03:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:04.531175422 +0000 UTC m=+1194.042429794" watchObservedRunningTime="2025-12-01 03:16:04.53939574 +0000 UTC m=+1194.050650112" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.303753 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397293 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-log-httpd\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397374 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wflk2\" (UniqueName: \"kubernetes.io/projected/042ca337-471d-4796-9cc7-2561bd51219e-kube-api-access-wflk2\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397406 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-scripts\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397463 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-sg-core-conf-yaml\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397478 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-config-data\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397522 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-combined-ca-bundle\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397638 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-run-httpd\") pod \"042ca337-471d-4796-9cc7-2561bd51219e\" (UID: \"042ca337-471d-4796-9cc7-2561bd51219e\") " Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.397663 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.398050 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.398094 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.417361 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-scripts" (OuterVolumeSpecName: "scripts") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.418243 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042ca337-471d-4796-9cc7-2561bd51219e-kube-api-access-wflk2" (OuterVolumeSpecName: "kube-api-access-wflk2") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). InnerVolumeSpecName "kube-api-access-wflk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.474793 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.479022 4880 generic.go:334] "Generic (PLEG): container finished" podID="042ca337-471d-4796-9cc7-2561bd51219e" containerID="ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791" exitCode=0 Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.479082 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.479128 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerDied","Data":"ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791"} Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.479156 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"042ca337-471d-4796-9cc7-2561bd51219e","Type":"ContainerDied","Data":"bb65bc40fac0877d74f8e2ef35290b305557bd9db587b024b3eac434b0993dc1"} Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.479182 4880 scope.go:117] "RemoveContainer" containerID="55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.492366 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.499971 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wflk2\" (UniqueName: \"kubernetes.io/projected/042ca337-471d-4796-9cc7-2561bd51219e-kube-api-access-wflk2\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.499994 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.500003 4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.500011 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.500019 4880 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/042ca337-471d-4796-9cc7-2561bd51219e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.516709 4880 scope.go:117] "RemoveContainer" containerID="3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.527302 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-config-data" (OuterVolumeSpecName: "config-data") pod "042ca337-471d-4796-9cc7-2561bd51219e" (UID: "042ca337-471d-4796-9cc7-2561bd51219e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.544000 4880 scope.go:117] "RemoveContainer" containerID="f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.571910 4880 scope.go:117] "RemoveContainer" containerID="ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.590681 4880 scope.go:117] "RemoveContainer" containerID="55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.591128 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3\": container with ID starting with 55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3 not found: ID does not exist" containerID="55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.591157 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3"} err="failed to get container status \"55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3\": rpc error: code = NotFound desc = could not find container \"55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3\": container with ID starting with 55d6e8d706a2d93ef60762e1e19ed1a3f18ec5217b36e97f382146543a3041b3 not found: ID does not exist" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.591177 4880 scope.go:117] "RemoveContainer" containerID="3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.591568 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42\": container with ID starting with 3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42 not found: ID does not exist" containerID="3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.591590 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42"} err="failed to get container status \"3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42\": rpc error: code = NotFound desc = could not find container \"3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42\": container with ID starting with 3902e29d5760b768388a1a11710bfbbed379afeac8e79eaac5612f7ea6361c42 not found: ID does not exist" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.591602 4880 scope.go:117] "RemoveContainer" containerID="f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.592062 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b\": container with ID starting with f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b not found: ID does not exist" containerID="f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.592107 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b"} err="failed to get container status \"f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b\": rpc error: code = NotFound desc = could not find container \"f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b\": 
container with ID starting with f17222d88ed383d6d0f3d5e68a3c62e34040e96e09872f3bce96367446fdd35b not found: ID does not exist" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.592135 4880 scope.go:117] "RemoveContainer" containerID="ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.592457 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791\": container with ID starting with ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791 not found: ID does not exist" containerID="ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.592484 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791"} err="failed to get container status \"ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791\": rpc error: code = NotFound desc = could not find container \"ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791\": container with ID starting with ce69dc4a7009ef85112eca7c5439848f99e1241c66ba50032c06b7401aa05791 not found: ID does not exist" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.602951 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/042ca337-471d-4796-9cc7-2561bd51219e-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.808020 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.819942 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.835586 4880 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.836170 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="proxy-httpd" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836196 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="proxy-httpd" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.836215 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="sg-core" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836225 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="sg-core" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.836251 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-notification-agent" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836259 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-notification-agent" Dec 01 03:16:05 crc kubenswrapper[4880]: E1201 03:16:05.836290 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-central-agent" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836299 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-central-agent" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836548 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="sg-core" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836586 4880 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-notification-agent" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836603 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="ceilometer-central-agent" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.836616 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="042ca337-471d-4796-9cc7-2561bd51219e" containerName="proxy-httpd" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.840370 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.844076 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.844354 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.844824 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.908266 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-scripts\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.908491 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-run-httpd\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.908576 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgcdn\" (UniqueName: \"kubernetes.io/projected/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-kube-api-access-lgcdn\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.908683 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.908755 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.908894 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-config-data\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:05 crc kubenswrapper[4880]: I1201 03:16:05.909006 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-log-httpd\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.010208 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-config-data\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.010280 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-log-httpd\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.010348 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-scripts\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.010371 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-run-httpd\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.010394 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgcdn\" (UniqueName: \"kubernetes.io/projected/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-kube-api-access-lgcdn\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.010418 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 
03:16:06.010433 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.011034 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-log-httpd\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.011556 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-run-httpd\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.016564 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.016936 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-scripts\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.017376 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-config-data\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " 
pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.022047 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.028818 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgcdn\" (UniqueName: \"kubernetes.io/projected/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-kube-api-access-lgcdn\") pod \"ceilometer-0\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.169153 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:06 crc kubenswrapper[4880]: W1201 03:16:06.642524 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bd14ca2_35c1_4a1a_bb66_1dcb9290a772.slice/crio-73d6d222dbcb8e2961938f590d9471c635e0f0335fd3be0f21fd56f589d08aa1 WatchSource:0}: Error finding container 73d6d222dbcb8e2961938f590d9471c635e0f0335fd3be0f21fd56f589d08aa1: Status 404 returned error can't find the container with id 73d6d222dbcb8e2961938f590d9471c635e0f0335fd3be0f21fd56f589d08aa1 Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.645142 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:06 crc kubenswrapper[4880]: I1201 03:16:06.796073 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042ca337-471d-4796-9cc7-2561bd51219e" path="/var/lib/kubelet/pods/042ca337-471d-4796-9cc7-2561bd51219e/volumes" Dec 01 03:16:07 crc kubenswrapper[4880]: I1201 03:16:07.498669 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerStarted","Data":"a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044"} Dec 01 03:16:07 crc kubenswrapper[4880]: I1201 03:16:07.499037 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerStarted","Data":"3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4"} Dec 01 03:16:07 crc kubenswrapper[4880]: I1201 03:16:07.499052 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerStarted","Data":"73d6d222dbcb8e2961938f590d9471c635e0f0335fd3be0f21fd56f589d08aa1"} Dec 01 03:16:08 crc kubenswrapper[4880]: I1201 03:16:08.511235 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerStarted","Data":"babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea"} Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.036600 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.522776 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerStarted","Data":"1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055"} Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.522937 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-central-agent" containerID="cri-o://3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4" gracePeriod=30 Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.523173 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="proxy-httpd" containerID="cri-o://1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055" gracePeriod=30 Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.523225 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-notification-agent" containerID="cri-o://a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044" gracePeriod=30 Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.523358 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.523396 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="sg-core" containerID="cri-o://babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea" gracePeriod=30 Dec 01 03:16:09 crc kubenswrapper[4880]: I1201 03:16:09.551201 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.92921815 podStartE2EDuration="4.551181595s" podCreationTimestamp="2025-12-01 03:16:05 +0000 UTC" firstStartedPulling="2025-12-01 03:16:06.64425968 +0000 UTC m=+1196.155514042" lastFinishedPulling="2025-12-01 03:16:09.266223125 +0000 UTC m=+1198.777477487" observedRunningTime="2025-12-01 03:16:09.546555658 +0000 UTC m=+1199.057810040" watchObservedRunningTime="2025-12-01 03:16:09.551181595 +0000 UTC m=+1199.062435967" Dec 01 03:16:10 crc kubenswrapper[4880]: I1201 03:16:10.535906 4880 generic.go:334] "Generic (PLEG): container finished" podID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerID="babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea" exitCode=2 Dec 01 03:16:10 crc kubenswrapper[4880]: I1201 03:16:10.535950 4880 
generic.go:334] "Generic (PLEG): container finished" podID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerID="a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044" exitCode=0
Dec 01 03:16:10 crc kubenswrapper[4880]: I1201 03:16:10.535974 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerDied","Data":"babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea"}
Dec 01 03:16:10 crc kubenswrapper[4880]: I1201 03:16:10.536003 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerDied","Data":"a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044"}
Dec 01 03:16:11 crc kubenswrapper[4880]: I1201 03:16:11.166988 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:11 crc kubenswrapper[4880]: I1201 03:16:11.167256 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:11 crc kubenswrapper[4880]: I1201 03:16:11.207251 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:11 crc kubenswrapper[4880]: I1201 03:16:11.221257 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:11 crc kubenswrapper[4880]: I1201 03:16:11.544789 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:11 crc kubenswrapper[4880]: I1201 03:16:11.544832 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.181036 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.181382 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.243463 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.247937 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.382563 4880 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2b6fa45f-b959-4fae-958d-06f32307b7d7"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2b6fa45f-b959-4fae-958d-06f32307b7d7] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2b6fa45f_b959_4fae_958d_06f32307b7d7.slice"
Dec 01 03:16:12 crc kubenswrapper[4880]: E1201 03:16:12.382631 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod2b6fa45f-b959-4fae-958d-06f32307b7d7] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod2b6fa45f-b959-4fae-958d-06f32307b7d7] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2b6fa45f_b959_4fae_958d_06f32307b7d7.slice" pod="openstack/heat-api-689598d56f-hm2sf" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.556277 4880 generic.go:334] "Generic (PLEG): container finished" podID="b2e7d851-5607-477c-a499-bee4568e24c2" containerID="4dc1f945c882a4d7c06c75fa753538910c0ddfaf0e27da2fa29be03cd5691e2b" exitCode=0
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.556341 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" event={"ID":"b2e7d851-5607-477c-a499-bee4568e24c2","Type":"ContainerDied","Data":"4dc1f945c882a4d7c06c75fa753538910c0ddfaf0e27da2fa29be03cd5691e2b"}
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.556367 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-689598d56f-hm2sf"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.557344 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.557384 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.601079 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-689598d56f-hm2sf"]
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.607778 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-689598d56f-hm2sf"]
Dec 01 03:16:12 crc kubenswrapper[4880]: I1201 03:16:12.839592 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6fa45f-b959-4fae-958d-06f32307b7d7" path="/var/lib/kubelet/pods/2b6fa45f-b959-4fae-958d-06f32307b7d7/volumes"
Dec 01 03:16:13 crc kubenswrapper[4880]: I1201 03:16:13.945503 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nj9mn"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.016042 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.016159 4880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.039565 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-combined-ca-bundle\") pod \"b2e7d851-5607-477c-a499-bee4568e24c2\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") "
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.039682 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-config-data\") pod \"b2e7d851-5607-477c-a499-bee4568e24c2\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") "
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.039705 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-scripts\") pod \"b2e7d851-5607-477c-a499-bee4568e24c2\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") "
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.039854 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhjj\" (UniqueName: \"kubernetes.io/projected/b2e7d851-5607-477c-a499-bee4568e24c2-kube-api-access-slhjj\") pod \"b2e7d851-5607-477c-a499-bee4568e24c2\" (UID: \"b2e7d851-5607-477c-a499-bee4568e24c2\") "
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.053859 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e7d851-5607-477c-a499-bee4568e24c2-kube-api-access-slhjj" (OuterVolumeSpecName: "kube-api-access-slhjj") pod "b2e7d851-5607-477c-a499-bee4568e24c2" (UID: "b2e7d851-5607-477c-a499-bee4568e24c2"). InnerVolumeSpecName "kube-api-access-slhjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.059684 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-scripts" (OuterVolumeSpecName: "scripts") pod "b2e7d851-5607-477c-a499-bee4568e24c2" (UID: "b2e7d851-5607-477c-a499-bee4568e24c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.082486 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-config-data" (OuterVolumeSpecName: "config-data") pod "b2e7d851-5607-477c-a499-bee4568e24c2" (UID: "b2e7d851-5607-477c-a499-bee4568e24c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.110228 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e7d851-5607-477c-a499-bee4568e24c2" (UID: "b2e7d851-5607-477c-a499-bee4568e24c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.141603 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.141631 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.141641 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slhjj\" (UniqueName: \"kubernetes.io/projected/b2e7d851-5607-477c-a499-bee4568e24c2-kube-api-access-slhjj\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.142943 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e7d851-5607-477c-a499-bee4568e24c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.202816 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.573722 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nj9mn" event={"ID":"b2e7d851-5607-477c-a499-bee4568e24c2","Type":"ContainerDied","Data":"85b855207a68c04bcab0ec15f928071ad0497af7285a551a3ebc54793ebbb34a"}
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.574046 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85b855207a68c04bcab0ec15f928071ad0497af7285a551a3ebc54793ebbb34a"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.573736 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nj9mn"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.687932 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 01 03:16:14 crc kubenswrapper[4880]: E1201 03:16:14.688337 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e7d851-5607-477c-a499-bee4568e24c2" containerName="nova-cell0-conductor-db-sync"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.688353 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e7d851-5607-477c-a499-bee4568e24c2" containerName="nova-cell0-conductor-db-sync"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.688565 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e7d851-5607-477c-a499-bee4568e24c2" containerName="nova-cell0-conductor-db-sync"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.689184 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.696715 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.697364 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.697736 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mcw5c"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.765047 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae3e0ff-5293-4c90-b9a1-2401345d73af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.765132 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749tz\" (UniqueName: \"kubernetes.io/projected/eae3e0ff-5293-4c90-b9a1-2401345d73af-kube-api-access-749tz\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.765155 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae3e0ff-5293-4c90-b9a1-2401345d73af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.866677 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae3e0ff-5293-4c90-b9a1-2401345d73af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.866799 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-749tz\" (UniqueName: \"kubernetes.io/projected/eae3e0ff-5293-4c90-b9a1-2401345d73af-kube-api-access-749tz\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.866824 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae3e0ff-5293-4c90-b9a1-2401345d73af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.873732 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae3e0ff-5293-4c90-b9a1-2401345d73af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.901575 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae3e0ff-5293-4c90-b9a1-2401345d73af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:14 crc kubenswrapper[4880]: I1201 03:16:14.906408 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-749tz\" (UniqueName: \"kubernetes.io/projected/eae3e0ff-5293-4c90-b9a1-2401345d73af-kube-api-access-749tz\") pod \"nova-cell0-conductor-0\" (UID: \"eae3e0ff-5293-4c90-b9a1-2401345d73af\") " pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:15 crc kubenswrapper[4880]: I1201 03:16:15.015752 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:15 crc kubenswrapper[4880]: I1201 03:16:15.318664 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:15 crc kubenswrapper[4880]: I1201 03:16:15.319268 4880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 03:16:15 crc kubenswrapper[4880]: I1201 03:16:15.514863 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Dec 01 03:16:15 crc kubenswrapper[4880]: I1201 03:16:15.516460 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 01 03:16:15 crc kubenswrapper[4880]: I1201 03:16:15.615147 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eae3e0ff-5293-4c90-b9a1-2401345d73af","Type":"ContainerStarted","Data":"10bf657d629fa122fa01f57457eaf5a5235c9bbde26f1c016d47241bd2df7b45"}
Dec 01 03:16:17 crc kubenswrapper[4880]: I1201 03:16:17.632000 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eae3e0ff-5293-4c90-b9a1-2401345d73af","Type":"ContainerStarted","Data":"eb6bb47240b7f969f6d87720fb85ebbdd81b6093c99c672a61bca9e84eb3caec"}
Dec 01 03:16:17 crc kubenswrapper[4880]: I1201 03:16:17.633914 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:17 crc kubenswrapper[4880]: I1201 03:16:17.655443 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.655420044 podStartE2EDuration="3.655420044s" podCreationTimestamp="2025-12-01 03:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:17.65250726 +0000 UTC m=+1207.163761682" watchObservedRunningTime="2025-12-01 03:16:17.655420044 +0000 UTC m=+1207.166674456"
Dec 01 03:16:19 crc kubenswrapper[4880]: I1201 03:16:19.668810 4880 generic.go:334] "Generic (PLEG): container finished" podID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerID="3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4" exitCode=0
Dec 01 03:16:19 crc kubenswrapper[4880]: I1201 03:16:19.668926 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerDied","Data":"3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4"}
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.064246 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.719237 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-sn8nz"]
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.720342 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.726526 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.726760 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.737630 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sn8nz"]
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.813218 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-scripts\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.813285 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.813397 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-config-data\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.813419 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh59\" (UniqueName: \"kubernetes.io/projected/f0910838-ee7d-4d85-973d-4d34d331e684-kube-api-access-4sh59\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.901939 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.903431 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.907189 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.914448 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.914634 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-config-data\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.914659 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh59\" (UniqueName: \"kubernetes.io/projected/f0910838-ee7d-4d85-973d-4d34d331e684-kube-api-access-4sh59\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.914712 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-scripts\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.951664 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.982850 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-config-data\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.983366 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.997320 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-scripts\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:25 crc kubenswrapper[4880]: I1201 03:16:25.998514 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.003394 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sh59\" (UniqueName: \"kubernetes.io/projected/f0910838-ee7d-4d85-973d-4d34d331e684-kube-api-access-4sh59\") pod \"nova-cell0-cell-mapping-sn8nz\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") " pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.018920 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmb5\" (UniqueName: \"kubernetes.io/projected/83487d74-fdd0-493c-8b33-5de13ec1ed53-kube-api-access-xdmb5\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.018977 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.019040 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.022422 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.022650 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.030265 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.042307 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122423 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122509 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-config-data\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122528 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-logs\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122556 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122593 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvx9\" (UniqueName: \"kubernetes.io/projected/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-kube-api-access-mwvx9\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122632 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.122697 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmb5\" (UniqueName: \"kubernetes.io/projected/83487d74-fdd0-493c-8b33-5de13ec1ed53-kube-api-access-xdmb5\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.126019 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.128157 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.140411 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.141722 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.147352 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.158744 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.159832 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmb5\" (UniqueName: \"kubernetes.io/projected/83487d74-fdd0-493c-8b33-5de13ec1ed53-kube-api-access-xdmb5\") pod \"nova-cell1-novncproxy-0\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.238636 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.239219 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.239332 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-config-data\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.239354 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-config-data\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.239373 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-logs\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.239414 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvx9\" (UniqueName: \"kubernetes.io/projected/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-kube-api-access-mwvx9\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.239431 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8j2\" (UniqueName: \"kubernetes.io/projected/59afeb64-c66b-4908-b0d7-1099ec2dd375-kube-api-access-rh8j2\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.243534 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-logs\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.246118 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-config-data\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.248448 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.285771 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.289006 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.292333 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.298967 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvx9\" (UniqueName: \"kubernetes.io/projected/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-kube-api-access-mwvx9\") pod \"nova-api-0\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.303179 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.352585 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.353652 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8j2\" (UniqueName: \"kubernetes.io/projected/59afeb64-c66b-4908-b0d7-1099ec2dd375-kube-api-access-rh8j2\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.353782 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.355385 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-config-data\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.385531 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.386115 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-config-data\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.439392 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8j2\" (UniqueName: \"kubernetes.io/projected/59afeb64-c66b-4908-b0d7-1099ec2dd375-kube-api-access-rh8j2\") pod \"nova-scheduler-0\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " pod="openstack/nova-scheduler-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.446187 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.460687 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f5698bdfc-dct9n"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.479998 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.497274 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f5698bdfc-dct9n"]
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.498904 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-config\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.498961 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n"
Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.498998 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-config-data\")
pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499026 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6db\" (UniqueName: \"kubernetes.io/projected/418e8113-9ac3-470d-81be-462508201cf8-kube-api-access-gm6db\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499352 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8qm\" (UniqueName: \"kubernetes.io/projected/7c59659f-7a35-4df4-8816-4c48a175e7a4-kube-api-access-lm8qm\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499394 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499486 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-swift-storage-0\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499722 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-svc\") pod 
\"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499820 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418e8113-9ac3-470d-81be-462508201cf8-logs\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.499854 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.532240 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.603820 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-swift-storage-0\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.603942 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-svc\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.603982 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418e8113-9ac3-470d-81be-462508201cf8-logs\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604006 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604071 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-config\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604089 
4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604106 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-config-data\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604125 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6db\" (UniqueName: \"kubernetes.io/projected/418e8113-9ac3-470d-81be-462508201cf8-kube-api-access-gm6db\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604148 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8qm\" (UniqueName: \"kubernetes.io/projected/7c59659f-7a35-4df4-8816-4c48a175e7a4-kube-api-access-lm8qm\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604170 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.604649 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-swift-storage-0\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.605467 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418e8113-9ac3-470d-81be-462508201cf8-logs\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.605922 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-config\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.608669 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-svc\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.608759 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.609472 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: 
\"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.616578 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-config-data\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.620974 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.621618 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8qm\" (UniqueName: \"kubernetes.io/projected/7c59659f-7a35-4df4-8816-4c48a175e7a4-kube-api-access-lm8qm\") pod \"dnsmasq-dns-6f5698bdfc-dct9n\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.623374 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6db\" (UniqueName: \"kubernetes.io/projected/418e8113-9ac3-470d-81be-462508201cf8-kube-api-access-gm6db\") pod \"nova-metadata-0\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") " pod="openstack/nova-metadata-0" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.803844 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sn8nz"] Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.829827 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:26 crc kubenswrapper[4880]: I1201 03:16:26.925955 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.005154 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.209619 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.231925 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.494423 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f5698bdfc-dct9n"] Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.583904 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.764528 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"418e8113-9ac3-470d-81be-462508201cf8","Type":"ContainerStarted","Data":"337d4288439d11150272aaad9474592800c581b4ff9e92efa5d42b4872c445a5"} Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.766240 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59afeb64-c66b-4908-b0d7-1099ec2dd375","Type":"ContainerStarted","Data":"a85e5f12069507374a4701aa05c698de4aa625737c4be3a07d50806d026177a3"} Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.793977 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83487d74-fdd0-493c-8b33-5de13ec1ed53","Type":"ContainerStarted","Data":"0d104266e9a45c15a227f41616e0a38a2d2fbe62165bdddd7116f9b0dde7a6ba"} Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.797412 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sn8nz" event={"ID":"f0910838-ee7d-4d85-973d-4d34d331e684","Type":"ContainerStarted","Data":"3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3"} Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.797453 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sn8nz" event={"ID":"f0910838-ee7d-4d85-973d-4d34d331e684","Type":"ContainerStarted","Data":"dea82aa6aab23129c0497c57fd7b189a5396d65ead426577f46b47d283c9923b"} Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.801494 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d61218a6-8e9b-4255-9fa7-5212e1ef30b5","Type":"ContainerStarted","Data":"68c827d605ce27db56434478dad7cb81d55bd639f3cc0182a6c29bf64f2c996c"} Dec 01 03:16:27 crc kubenswrapper[4880]: I1201 03:16:27.810338 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" event={"ID":"7c59659f-7a35-4df4-8816-4c48a175e7a4","Type":"ContainerStarted","Data":"66f9dfd0ac2494775009862433d5e82992a5171bd42113eaec71e784e1bf5fa4"} Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.109372 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-sn8nz" podStartSLOduration=3.109353068 podStartE2EDuration="3.109353068s" podCreationTimestamp="2025-12-01 03:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:27.818046138 +0000 UTC m=+1217.329300520" watchObservedRunningTime="2025-12-01 03:16:28.109353068 +0000 UTC m=+1217.620607440" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.111302 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-szfn6"] Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.112484 4880 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.116213 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.116245 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.156097 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-szfn6"] Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.239172 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-config-data\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.240305 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.240337 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbwx\" (UniqueName: \"kubernetes.io/projected/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-kube-api-access-wjbwx\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.240378 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-scripts\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.342382 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-config-data\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.342471 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.342495 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjbwx\" (UniqueName: \"kubernetes.io/projected/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-kube-api-access-wjbwx\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.342530 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-scripts\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.351331 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.352683 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-config-data\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.360282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-scripts\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.375925 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjbwx\" (UniqueName: \"kubernetes.io/projected/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-kube-api-access-wjbwx\") pod \"nova-cell1-conductor-db-sync-szfn6\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.497996 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.853840 4880 generic.go:334] "Generic (PLEG): container finished" podID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerID="1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b" exitCode=0 Dec 01 03:16:28 crc kubenswrapper[4880]: I1201 03:16:28.854716 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" event={"ID":"7c59659f-7a35-4df4-8816-4c48a175e7a4","Type":"ContainerDied","Data":"1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b"} Dec 01 03:16:30 crc kubenswrapper[4880]: I1201 03:16:30.254444 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:30 crc kubenswrapper[4880]: I1201 03:16:30.263966 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.240906 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-szfn6"] Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.927139 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-szfn6" event={"ID":"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72","Type":"ContainerStarted","Data":"b5ec7f12809be1c1adc7ec411288618cbebd970e10e05473f10057002a763eb1"} Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.927426 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-szfn6" event={"ID":"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72","Type":"ContainerStarted","Data":"ce21ae9d1a9c6310e7db15b8d4c0d980bcaffe96833b50646f00eb350aaee67b"} Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.937377 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"59afeb64-c66b-4908-b0d7-1099ec2dd375","Type":"ContainerStarted","Data":"dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c"} Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.949553 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-szfn6" podStartSLOduration=3.949540673 podStartE2EDuration="3.949540673s" podCreationTimestamp="2025-12-01 03:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:31.943748907 +0000 UTC m=+1221.455003269" watchObservedRunningTime="2025-12-01 03:16:31.949540673 +0000 UTC m=+1221.460795045" Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.973823 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.51271386 podStartE2EDuration="5.973805736s" podCreationTimestamp="2025-12-01 03:16:26 +0000 UTC" firstStartedPulling="2025-12-01 03:16:27.276019434 +0000 UTC m=+1216.787273796" lastFinishedPulling="2025-12-01 03:16:30.7371113 +0000 UTC m=+1220.248365672" observedRunningTime="2025-12-01 03:16:31.962349777 +0000 UTC m=+1221.473604149" watchObservedRunningTime="2025-12-01 03:16:31.973805736 +0000 UTC m=+1221.485060108" Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.985861 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" event={"ID":"7c59659f-7a35-4df4-8816-4c48a175e7a4","Type":"ContainerStarted","Data":"c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10"} Dec 01 03:16:31 crc kubenswrapper[4880]: I1201 03:16:31.986136 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.006015 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" podStartSLOduration=6.00599652 podStartE2EDuration="6.00599652s" podCreationTimestamp="2025-12-01 03:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:32.005005204 +0000 UTC m=+1221.516259586" watchObservedRunningTime="2025-12-01 03:16:32.00599652 +0000 UTC m=+1221.517250892"
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.008181 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="83487d74-fdd0-493c-8b33-5de13ec1ed53" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff" gracePeriod=30
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.007946 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83487d74-fdd0-493c-8b33-5de13ec1ed53","Type":"ContainerStarted","Data":"042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff"}
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.021187 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d61218a6-8e9b-4255-9fa7-5212e1ef30b5","Type":"ContainerStarted","Data":"d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca"}
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.021228 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d61218a6-8e9b-4255-9fa7-5212e1ef30b5","Type":"ContainerStarted","Data":"2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f"}
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.054085 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"418e8113-9ac3-470d-81be-462508201cf8","Type":"ContainerStarted","Data":"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"}
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.054131 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"418e8113-9ac3-470d-81be-462508201cf8","Type":"ContainerStarted","Data":"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"}
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.054384 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-log" containerID="cri-o://61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70" gracePeriod=30
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.054654 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-metadata" containerID="cri-o://dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd" gracePeriod=30
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.081643 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.364213087 podStartE2EDuration="7.08162933s" podCreationTimestamp="2025-12-01 03:16:25 +0000 UTC" firstStartedPulling="2025-12-01 03:16:27.019053151 +0000 UTC m=+1216.530307523" lastFinishedPulling="2025-12-01 03:16:30.736469404 +0000 UTC m=+1220.247723766" observedRunningTime="2025-12-01 03:16:32.027458892 +0000 UTC m=+1221.538713264" watchObservedRunningTime="2025-12-01 03:16:32.08162933 +0000 UTC m=+1221.592883702"
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.128049 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.638702023 podStartE2EDuration="7.128034193s" podCreationTimestamp="2025-12-01 03:16:25 +0000 UTC" firstStartedPulling="2025-12-01 03:16:27.245979415 +0000 UTC m=+1216.757233787" lastFinishedPulling="2025-12-01 03:16:30.735311585 +0000 UTC m=+1220.246565957" observedRunningTime="2025-12-01 03:16:32.051658133 +0000 UTC m=+1221.562912515" watchObservedRunningTime="2025-12-01 03:16:32.128034193 +0000 UTC m=+1221.639288565"
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.162144 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.041958312 podStartE2EDuration="6.162118734s" podCreationTimestamp="2025-12-01 03:16:26 +0000 UTC" firstStartedPulling="2025-12-01 03:16:27.61783478 +0000 UTC m=+1217.129089142" lastFinishedPulling="2025-12-01 03:16:30.737995192 +0000 UTC m=+1220.249249564" observedRunningTime="2025-12-01 03:16:32.093249604 +0000 UTC m=+1221.604503986" watchObservedRunningTime="2025-12-01 03:16:32.162118734 +0000 UTC m=+1221.673373106"
Dec 01 03:16:32 crc kubenswrapper[4880]: I1201 03:16:32.990761 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.067953 4880 generic.go:334] "Generic (PLEG): container finished" podID="418e8113-9ac3-470d-81be-462508201cf8" containerID="dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd" exitCode=0
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.067984 4880 generic.go:334] "Generic (PLEG): container finished" podID="418e8113-9ac3-470d-81be-462508201cf8" containerID="61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70" exitCode=143
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.068773 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.069320 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"418e8113-9ac3-470d-81be-462508201cf8","Type":"ContainerDied","Data":"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"}
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.069348 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"418e8113-9ac3-470d-81be-462508201cf8","Type":"ContainerDied","Data":"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"}
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.069360 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"418e8113-9ac3-470d-81be-462508201cf8","Type":"ContainerDied","Data":"337d4288439d11150272aaad9474592800c581b4ff9e92efa5d42b4872c445a5"}
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.069404 4880 scope.go:117] "RemoveContainer" containerID="dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.099212 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-config-data\") pod \"418e8113-9ac3-470d-81be-462508201cf8\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") "
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.099357 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418e8113-9ac3-470d-81be-462508201cf8-logs\") pod \"418e8113-9ac3-470d-81be-462508201cf8\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") "
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.099399 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm6db\" (UniqueName: \"kubernetes.io/projected/418e8113-9ac3-470d-81be-462508201cf8-kube-api-access-gm6db\") pod \"418e8113-9ac3-470d-81be-462508201cf8\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") "
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.099415 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-combined-ca-bundle\") pod \"418e8113-9ac3-470d-81be-462508201cf8\" (UID: \"418e8113-9ac3-470d-81be-462508201cf8\") "
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.100628 4880 scope.go:117] "RemoveContainer" containerID="61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.101007 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418e8113-9ac3-470d-81be-462508201cf8-logs" (OuterVolumeSpecName: "logs") pod "418e8113-9ac3-470d-81be-462508201cf8" (UID: "418e8113-9ac3-470d-81be-462508201cf8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.114026 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418e8113-9ac3-470d-81be-462508201cf8-kube-api-access-gm6db" (OuterVolumeSpecName: "kube-api-access-gm6db") pod "418e8113-9ac3-470d-81be-462508201cf8" (UID: "418e8113-9ac3-470d-81be-462508201cf8"). InnerVolumeSpecName "kube-api-access-gm6db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.130738 4880 scope.go:117] "RemoveContainer" containerID="dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"
Dec 01 03:16:33 crc kubenswrapper[4880]: E1201 03:16:33.131450 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd\": container with ID starting with dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd not found: ID does not exist" containerID="dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.131515 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"} err="failed to get container status \"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd\": rpc error: code = NotFound desc = could not find container \"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd\": container with ID starting with dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd not found: ID does not exist"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.131540 4880 scope.go:117] "RemoveContainer" containerID="61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"
Dec 01 03:16:33 crc kubenswrapper[4880]: E1201 03:16:33.132962 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70\": container with ID starting with 61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70 not found: ID does not exist" containerID="61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.132992 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"} err="failed to get container status \"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70\": rpc error: code = NotFound desc = could not find container \"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70\": container with ID starting with 61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70 not found: ID does not exist"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.133006 4880 scope.go:117] "RemoveContainer" containerID="dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.136958 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd"} err="failed to get container status \"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd\": rpc error: code = NotFound desc = could not find container \"dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd\": container with ID starting with dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd not found: ID does not exist"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.136986 4880 scope.go:117] "RemoveContainer" containerID="61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.139701 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70"} err="failed to get container status \"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70\": rpc error: code = NotFound desc = could not find container \"61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70\": container with ID starting with 61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70 not found: ID does not exist"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.165967 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "418e8113-9ac3-470d-81be-462508201cf8" (UID: "418e8113-9ac3-470d-81be-462508201cf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.188069 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-config-data" (OuterVolumeSpecName: "config-data") pod "418e8113-9ac3-470d-81be-462508201cf8" (UID: "418e8113-9ac3-470d-81be-462508201cf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.201212 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.201244 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/418e8113-9ac3-470d-81be-462508201cf8-logs\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.201253 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm6db\" (UniqueName: \"kubernetes.io/projected/418e8113-9ac3-470d-81be-462508201cf8-kube-api-access-gm6db\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.201263 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418e8113-9ac3-470d-81be-462508201cf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.406415 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.414445 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.431518 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:33 crc kubenswrapper[4880]: E1201 03:16:33.431987 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-log"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.432013 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-log"
Dec 01 03:16:33 crc kubenswrapper[4880]: E1201 03:16:33.432067 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-metadata"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.432077 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-metadata"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.432309 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-log"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.432342 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="418e8113-9ac3-470d-81be-462508201cf8" containerName="nova-metadata-metadata"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.433598 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.436715 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.437660 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.453291 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.607539 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-config-data\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.607702 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceebad3-8008-4756-a34d-ff98489fe8f8-logs\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.607790 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.607826 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9qh\" (UniqueName: \"kubernetes.io/projected/3ceebad3-8008-4756-a34d-ff98489fe8f8-kube-api-access-6x9qh\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.607849 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.709401 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.709647 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x9qh\" (UniqueName: \"kubernetes.io/projected/3ceebad3-8008-4756-a34d-ff98489fe8f8-kube-api-access-6x9qh\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.709668 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.709710 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-config-data\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.709802 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceebad3-8008-4756-a34d-ff98489fe8f8-logs\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.710254 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceebad3-8008-4756-a34d-ff98489fe8f8-logs\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.714226 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.715172 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-config-data\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.718449 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.726725 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x9qh\" (UniqueName: \"kubernetes.io/projected/3ceebad3-8008-4756-a34d-ff98489fe8f8-kube-api-access-6x9qh\") pod \"nova-metadata-0\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " pod="openstack/nova-metadata-0"
Dec 01 03:16:33 crc kubenswrapper[4880]: I1201 03:16:33.748266 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 01 03:16:34 crc kubenswrapper[4880]: I1201 03:16:34.207080 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 01 03:16:34 crc kubenswrapper[4880]: I1201 03:16:34.796406 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418e8113-9ac3-470d-81be-462508201cf8" path="/var/lib/kubelet/pods/418e8113-9ac3-470d-81be-462508201cf8/volumes"
Dec 01 03:16:35 crc kubenswrapper[4880]: I1201 03:16:35.095388 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceebad3-8008-4756-a34d-ff98489fe8f8","Type":"ContainerStarted","Data":"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b"}
Dec 01 03:16:35 crc kubenswrapper[4880]: I1201 03:16:35.095430 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceebad3-8008-4756-a34d-ff98489fe8f8","Type":"ContainerStarted","Data":"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038"}
Dec 01 03:16:35 crc kubenswrapper[4880]: I1201 03:16:35.095441 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceebad3-8008-4756-a34d-ff98489fe8f8","Type":"ContainerStarted","Data":"d7948bad8c7017eb20aa5ad29c00ccc4807e10923d0bbf7f3c07fa1a089f0f81"}
Dec 01 03:16:35 crc kubenswrapper[4880]: I1201 03:16:35.123095 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.123077744 podStartE2EDuration="2.123077744s" podCreationTimestamp="2025-12-01 03:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:35.11420341 +0000 UTC m=+1224.625457822" watchObservedRunningTime="2025-12-01 03:16:35.123077744 +0000 UTC m=+1224.634332106"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.176233 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.352831 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.449510 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.449563 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.534115 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.534429 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.618841 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.832111 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n"
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.925464 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86459544c9-nrq5w"]
Dec 01 03:16:36 crc kubenswrapper[4880]: I1201 03:16:36.925981 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" containerName="dnsmasq-dns" containerID="cri-o://e4ad5d2bbbea602d8a47588e6e54db41d5c9cd8ed843886bb586f20742c77c96" gracePeriod=10
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.118823 4880 generic.go:334] "Generic (PLEG): container finished" podID="ba0767c5-9152-431e-b924-05ccd6875e08" containerID="e4ad5d2bbbea602d8a47588e6e54db41d5c9cd8ed843886bb586f20742c77c96" exitCode=0
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.118890 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" event={"ID":"ba0767c5-9152-431e-b924-05ccd6875e08","Type":"ContainerDied","Data":"e4ad5d2bbbea602d8a47588e6e54db41d5c9cd8ed843886bb586f20742c77c96"}
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.121621 4880 generic.go:334] "Generic (PLEG): container finished" podID="f0910838-ee7d-4d85-973d-4d34d331e684" containerID="3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3" exitCode=0
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.122396 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sn8nz" event={"ID":"f0910838-ee7d-4d85-973d-4d34d331e684","Type":"ContainerDied","Data":"3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3"}
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.176439 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.485828 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86459544c9-nrq5w"
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.518805 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-nb\") pod \"ba0767c5-9152-431e-b924-05ccd6875e08\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") "
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.518894 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-config\") pod \"ba0767c5-9152-431e-b924-05ccd6875e08\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") "
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.518921 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-swift-storage-0\") pod \"ba0767c5-9152-431e-b924-05ccd6875e08\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") "
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.518945 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddt9m\" (UniqueName: \"kubernetes.io/projected/ba0767c5-9152-431e-b924-05ccd6875e08-kube-api-access-ddt9m\") pod \"ba0767c5-9152-431e-b924-05ccd6875e08\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") "
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.518970 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-svc\") pod \"ba0767c5-9152-431e-b924-05ccd6875e08\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") "
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.519008 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-sb\") pod \"ba0767c5-9152-431e-b924-05ccd6875e08\" (UID: \"ba0767c5-9152-431e-b924-05ccd6875e08\") "
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.532317 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.532403 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.571008 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0767c5-9152-431e-b924-05ccd6875e08-kube-api-access-ddt9m" (OuterVolumeSpecName: "kube-api-access-ddt9m") pod "ba0767c5-9152-431e-b924-05ccd6875e08" (UID: "ba0767c5-9152-431e-b924-05ccd6875e08"). InnerVolumeSpecName "kube-api-access-ddt9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.575851 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-config" (OuterVolumeSpecName: "config") pod "ba0767c5-9152-431e-b924-05ccd6875e08" (UID: "ba0767c5-9152-431e-b924-05ccd6875e08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.606694 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba0767c5-9152-431e-b924-05ccd6875e08" (UID: "ba0767c5-9152-431e-b924-05ccd6875e08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.621361 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-config\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.621387 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddt9m\" (UniqueName: \"kubernetes.io/projected/ba0767c5-9152-431e-b924-05ccd6875e08-kube-api-access-ddt9m\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.621396 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.626650 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba0767c5-9152-431e-b924-05ccd6875e08" (UID: "ba0767c5-9152-431e-b924-05ccd6875e08"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.630546 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba0767c5-9152-431e-b924-05ccd6875e08" (UID: "ba0767c5-9152-431e-b924-05ccd6875e08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.636909 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba0767c5-9152-431e-b924-05ccd6875e08" (UID: "ba0767c5-9152-431e-b924-05ccd6875e08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.722682 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.722717 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:37 crc kubenswrapper[4880]: I1201 03:16:37.722727 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba0767c5-9152-431e-b924-05ccd6875e08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.133143 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86459544c9-nrq5w" event={"ID":"ba0767c5-9152-431e-b924-05ccd6875e08","Type":"ContainerDied","Data":"8ad2dfcace0580eb227ac718f406d7d31a96c1c3af447e8299c3545cf9e592c8"}
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.133195 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86459544c9-nrq5w"
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.133211 4880 scope.go:117] "RemoveContainer" containerID="e4ad5d2bbbea602d8a47588e6e54db41d5c9cd8ed843886bb586f20742c77c96"
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.168695 4880 scope.go:117] "RemoveContainer" containerID="b06c5124332cd92c00aeb5ee5b76fc68ab61976bb781f8c32502e65feaf8c508"
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.249604 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86459544c9-nrq5w"]
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.262362 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86459544c9-nrq5w"]
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.551663 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sn8nz"
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.645184 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-config-data\") pod \"f0910838-ee7d-4d85-973d-4d34d331e684\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") "
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.645295 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sh59\" (UniqueName: \"kubernetes.io/projected/f0910838-ee7d-4d85-973d-4d34d331e684-kube-api-access-4sh59\") pod \"f0910838-ee7d-4d85-973d-4d34d331e684\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") "
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.645363 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-combined-ca-bundle\") pod \"f0910838-ee7d-4d85-973d-4d34d331e684\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") "
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.645593 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-scripts\") pod \"f0910838-ee7d-4d85-973d-4d34d331e684\" (UID: \"f0910838-ee7d-4d85-973d-4d34d331e684\") "
Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.650250 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-scripts" (OuterVolumeSpecName: "scripts") pod "f0910838-ee7d-4d85-973d-4d34d331e684" (UID: "f0910838-ee7d-4d85-973d-4d34d331e684"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.653019 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0910838-ee7d-4d85-973d-4d34d331e684-kube-api-access-4sh59" (OuterVolumeSpecName: "kube-api-access-4sh59") pod "f0910838-ee7d-4d85-973d-4d34d331e684" (UID: "f0910838-ee7d-4d85-973d-4d34d331e684"). InnerVolumeSpecName "kube-api-access-4sh59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.673915 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-config-data" (OuterVolumeSpecName: "config-data") pod "f0910838-ee7d-4d85-973d-4d34d331e684" (UID: "f0910838-ee7d-4d85-973d-4d34d331e684"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.691609 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0910838-ee7d-4d85-973d-4d34d331e684" (UID: "f0910838-ee7d-4d85-973d-4d34d331e684"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.747598 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.747635 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.747645 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sh59\" (UniqueName: \"kubernetes.io/projected/f0910838-ee7d-4d85-973d-4d34d331e684-kube-api-access-4sh59\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.747655 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0910838-ee7d-4d85-973d-4d34d331e684-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.749038 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.749089 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 03:16:38 crc kubenswrapper[4880]: I1201 03:16:38.793656 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" path="/var/lib/kubelet/pods/ba0767c5-9152-431e-b924-05ccd6875e08/volumes" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.143714 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sn8nz" 
event={"ID":"f0910838-ee7d-4d85-973d-4d34d331e684","Type":"ContainerDied","Data":"dea82aa6aab23129c0497c57fd7b189a5396d65ead426577f46b47d283c9923b"} Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.143920 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea82aa6aab23129c0497c57fd7b189a5396d65ead426577f46b47d283c9923b" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.143966 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sn8nz" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.146942 4880 generic.go:334] "Generic (PLEG): container finished" podID="a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" containerID="b5ec7f12809be1c1adc7ec411288618cbebd970e10e05473f10057002a763eb1" exitCode=0 Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.147000 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-szfn6" event={"ID":"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72","Type":"ContainerDied","Data":"b5ec7f12809be1c1adc7ec411288618cbebd970e10e05473f10057002a763eb1"} Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.343071 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.343321 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-log" containerID="cri-o://2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f" gracePeriod=30 Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.343406 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-api" containerID="cri-o://d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca" gracePeriod=30 Dec 01 03:16:39 crc 
kubenswrapper[4880]: I1201 03:16:39.368154 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.387321 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.387516 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-log" containerID="cri-o://3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038" gracePeriod=30 Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.387901 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-metadata" containerID="cri-o://879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b" gracePeriod=30 Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.562702 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0910838_ee7d_4d85_973d_4d34d331e684.slice/crio-conmon-3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0910838_ee7d_4d85_973d_4d34d331e684.slice/crio-conmon-3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.562790 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0910838_ee7d_4d85_973d_4d34d331e684.slice/crio-3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0910838_ee7d_4d85_973d_4d34d331e684.slice/crio-3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.564465 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-337d4288439d11150272aaad9474592800c581b4ff9e92efa5d42b4872c445a5": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-337d4288439d11150272aaad9474592800c581b4ff9e92efa5d42b4872c445a5: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.564491 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c59659f_7a35_4df4_8816_4c48a175e7a4.slice/crio-conmon-1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c59659f_7a35_4df4_8816_4c48a175e7a4.slice/crio-conmon-1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.564524 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c59659f_7a35_4df4_8816_4c48a175e7a4.slice/crio-1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c59659f_7a35_4df4_8816_4c48a175e7a4.slice/crio-1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b.scope: no such file or 
directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.569823 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-conmon-2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-conmon-2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.569992 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-conmon-61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-conmon-61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.570931 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-61ca28b3d10a979b0d7dee7cd1042ceab19cf4b4fd53891b61e4838ee8920d70.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.570966 4880 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.570980 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-conmon-dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-conmon-dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: W1201 03:16:39.570994 4880 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418e8113_9ac3_470d_81be_462508201cf8.slice/crio-dbeb8ef3195a9bbed277e9a9a6f1dbd542dbc231c9ce0207e57a499358a4e1dd.scope: no such file or directory Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.910182 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.975260 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989223 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-combined-ca-bundle\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989320 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-sg-core-conf-yaml\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989402 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-scripts\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989468 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgcdn\" (UniqueName: \"kubernetes.io/projected/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-kube-api-access-lgcdn\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989500 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-config-data\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989545 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-log-httpd\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.989608 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-run-httpd\") pod \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\" (UID: \"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772\") " Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.990348 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.993178 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:39 crc kubenswrapper[4880]: I1201 03:16:39.999052 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-kube-api-access-lgcdn" (OuterVolumeSpecName: "kube-api-access-lgcdn") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "kube-api-access-lgcdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.000988 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-scripts" (OuterVolumeSpecName: "scripts") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.075771 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.096688 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-config-data\") pod \"3ceebad3-8008-4756-a34d-ff98489fe8f8\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.098888 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceebad3-8008-4756-a34d-ff98489fe8f8-logs\") pod \"3ceebad3-8008-4756-a34d-ff98489fe8f8\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.098960 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-nova-metadata-tls-certs\") pod \"3ceebad3-8008-4756-a34d-ff98489fe8f8\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " Dec 01 03:16:40 crc 
kubenswrapper[4880]: I1201 03:16:40.099024 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x9qh\" (UniqueName: \"kubernetes.io/projected/3ceebad3-8008-4756-a34d-ff98489fe8f8-kube-api-access-6x9qh\") pod \"3ceebad3-8008-4756-a34d-ff98489fe8f8\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.099060 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-combined-ca-bundle\") pod \"3ceebad3-8008-4756-a34d-ff98489fe8f8\" (UID: \"3ceebad3-8008-4756-a34d-ff98489fe8f8\") " Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.099641 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceebad3-8008-4756-a34d-ff98489fe8f8-logs" (OuterVolumeSpecName: "logs") pod "3ceebad3-8008-4756-a34d-ff98489fe8f8" (UID: "3ceebad3-8008-4756-a34d-ff98489fe8f8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.100193 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.100212 4880 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.100220 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceebad3-8008-4756-a34d-ff98489fe8f8-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.100228 4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.100272 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.100280 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgcdn\" (UniqueName: \"kubernetes.io/projected/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-kube-api-access-lgcdn\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.108195 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ceebad3-8008-4756-a34d-ff98489fe8f8-kube-api-access-6x9qh" (OuterVolumeSpecName: "kube-api-access-6x9qh") pod "3ceebad3-8008-4756-a34d-ff98489fe8f8" (UID: "3ceebad3-8008-4756-a34d-ff98489fe8f8"). 
InnerVolumeSpecName "kube-api-access-6x9qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.128981 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-config-data" (OuterVolumeSpecName: "config-data") pod "3ceebad3-8008-4756-a34d-ff98489fe8f8" (UID: "3ceebad3-8008-4756-a34d-ff98489fe8f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.145893 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ceebad3-8008-4756-a34d-ff98489fe8f8" (UID: "3ceebad3-8008-4756-a34d-ff98489fe8f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.154799 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.181109 4880 generic.go:334] "Generic (PLEG): container finished" podID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerID="2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f" exitCode=143 Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.181183 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d61218a6-8e9b-4255-9fa7-5212e1ef30b5","Type":"ContainerDied","Data":"2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f"} Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.208007 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x9qh\" (UniqueName: \"kubernetes.io/projected/3ceebad3-8008-4756-a34d-ff98489fe8f8-kube-api-access-6x9qh\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.208036 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.208046 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.208056 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.213655 4880 generic.go:334] "Generic (PLEG): container finished" podID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerID="879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b" exitCode=0 Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 
03:16:40.213698 4880 generic.go:334] "Generic (PLEG): container finished" podID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerID="3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038" exitCode=143 Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.213744 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceebad3-8008-4756-a34d-ff98489fe8f8","Type":"ContainerDied","Data":"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b"} Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.213775 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceebad3-8008-4756-a34d-ff98489fe8f8","Type":"ContainerDied","Data":"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038"} Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.213788 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceebad3-8008-4756-a34d-ff98489fe8f8","Type":"ContainerDied","Data":"d7948bad8c7017eb20aa5ad29c00ccc4807e10923d0bbf7f3c07fa1a089f0f81"} Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.213808 4880 scope.go:117] "RemoveContainer" containerID="879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.213939 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.214153 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-config-data" (OuterVolumeSpecName: "config-data") pod "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" (UID: "8bd14ca2-35c1-4a1a-bb66-1dcb9290a772"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.266284 4880 generic.go:334] "Generic (PLEG): container finished" podID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerID="1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055" exitCode=137 Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.266514 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.266970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerDied","Data":"1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055"} Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.267043 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd14ca2-35c1-4a1a-bb66-1dcb9290a772","Type":"ContainerDied","Data":"73d6d222dbcb8e2961938f590d9471c635e0f0335fd3be0f21fd56f589d08aa1"} Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.268262 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="59afeb64-c66b-4908-b0d7-1099ec2dd375" containerName="nova-scheduler-scheduler" containerID="cri-o://dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" gracePeriod=30 Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.280095 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3ceebad3-8008-4756-a34d-ff98489fe8f8" (UID: "3ceebad3-8008-4756-a34d-ff98489fe8f8"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.310985 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.311024 4880 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ceebad3-8008-4756-a34d-ff98489fe8f8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.327557 4880 scope.go:117] "RemoveContainer" containerID="3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.350073 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.370200 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.371632 4880 scope.go:117] "RemoveContainer" containerID="879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.377110 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b\": container with ID starting with 879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b not found: ID does not exist" containerID="879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.377154 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b"} err="failed to get container status 
\"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b\": rpc error: code = NotFound desc = could not find container \"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b\": container with ID starting with 879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b not found: ID does not exist" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.377182 4880 scope.go:117] "RemoveContainer" containerID="3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.378748 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038\": container with ID starting with 3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038 not found: ID does not exist" containerID="3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.378773 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038"} err="failed to get container status \"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038\": rpc error: code = NotFound desc = could not find container \"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038\": container with ID starting with 3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038 not found: ID does not exist" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.378787 4880 scope.go:117] "RemoveContainer" containerID="879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.381949 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b"} err="failed to get 
container status \"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b\": rpc error: code = NotFound desc = could not find container \"879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b\": container with ID starting with 879e16e3e943d210ba278f8211bb5f73768dad76b690ad7b2c575225bdd44d1b not found: ID does not exist" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.381978 4880 scope.go:117] "RemoveContainer" containerID="3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.386052 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038"} err="failed to get container status \"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038\": rpc error: code = NotFound desc = could not find container \"3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038\": container with ID starting with 3e0f591aab4e6c4cd586bb14d16486c1ffddc020a12a81b75445ff8660c45038 not found: ID does not exist" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.386111 4880 scope.go:117] "RemoveContainer" containerID="1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.396404 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.396995 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-central-agent" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397013 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-central-agent" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397031 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-log" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397038 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-log" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397123 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0910838-ee7d-4d85-973d-4d34d331e684" containerName="nova-manage" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397131 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0910838-ee7d-4d85-973d-4d34d331e684" containerName="nova-manage" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397141 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-metadata" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397149 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-metadata" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397160 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" containerName="dnsmasq-dns" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397166 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" containerName="dnsmasq-dns" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397173 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="sg-core" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397179 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="sg-core" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397192 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-notification-agent" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397199 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-notification-agent" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397210 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="proxy-httpd" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397216 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="proxy-httpd" Dec 01 03:16:40 crc kubenswrapper[4880]: E1201 03:16:40.397228 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" containerName="init" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397234 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" containerName="init" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397476 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-metadata" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397489 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-central-agent" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397499 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="proxy-httpd" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397505 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="sg-core" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397520 4880 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="ba0767c5-9152-431e-b924-05ccd6875e08" containerName="dnsmasq-dns" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397533 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0910838-ee7d-4d85-973d-4d34d331e684" containerName="nova-manage" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397546 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" containerName="nova-metadata-log" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.397554 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" containerName="ceilometer-notification-agent" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.414643 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.420113 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.420316 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.459958 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519138 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519219 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9gr\" (UniqueName: 
\"kubernetes.io/projected/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-kube-api-access-pd9gr\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519248 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-run-httpd\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519268 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-log-httpd\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519289 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-scripts\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519339 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.519361 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-config-data\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " 
pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.549290 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.557969 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.580849 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.586566 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.588341 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.588541 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.596066 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.620769 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-logs\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.620839 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.620863 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-config-data\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.620927 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.620948 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wd5f\" (UniqueName: \"kubernetes.io/projected/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-kube-api-access-9wd5f\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621003 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621038 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-config-data\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621062 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621084 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9gr\" (UniqueName: \"kubernetes.io/projected/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-kube-api-access-pd9gr\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621105 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-run-httpd\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621124 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-log-httpd\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.621149 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-scripts\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.622014 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-run-httpd\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.622067 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-log-httpd\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.624577 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-scripts\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.625065 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.625241 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.626127 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-config-data\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.635516 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9gr\" (UniqueName: \"kubernetes.io/projected/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-kube-api-access-pd9gr\") pod \"ceilometer-0\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " 
pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.722414 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-config-data\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.723435 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.723618 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-logs\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.723960 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.724103 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wd5f\" (UniqueName: \"kubernetes.io/projected/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-kube-api-access-9wd5f\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.724104 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-logs\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.725591 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-config-data\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.727175 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.735411 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.737864 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wd5f\" (UniqueName: \"kubernetes.io/projected/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-kube-api-access-9wd5f\") pod \"nova-metadata-0\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.798370 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ceebad3-8008-4756-a34d-ff98489fe8f8" path="/var/lib/kubelet/pods/3ceebad3-8008-4756-a34d-ff98489fe8f8/volumes" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.804081 4880 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="8bd14ca2-35c1-4a1a-bb66-1dcb9290a772" path="/var/lib/kubelet/pods/8bd14ca2-35c1-4a1a-bb66-1dcb9290a772/volumes" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.814029 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.886024 4880 scope.go:117] "RemoveContainer" containerID="babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.905729 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.925833 4880 scope.go:117] "RemoveContainer" containerID="a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044" Dec 01 03:16:40 crc kubenswrapper[4880]: I1201 03:16:40.980071 4880 scope.go:117] "RemoveContainer" containerID="3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.101223 4880 scope.go:117] "RemoveContainer" containerID="1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055" Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.103681 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055\": container with ID starting with 1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055 not found: ID does not exist" containerID="1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.103724 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055"} err="failed to get container status \"1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055\": rpc 
error: code = NotFound desc = could not find container \"1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055\": container with ID starting with 1328055725b3f69df6cef7842e95835ada9ca4c1930e95402f4c908c3c426055 not found: ID does not exist" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.103762 4880 scope.go:117] "RemoveContainer" containerID="babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea" Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.105223 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea\": container with ID starting with babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea not found: ID does not exist" containerID="babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.105250 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea"} err="failed to get container status \"babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea\": rpc error: code = NotFound desc = could not find container \"babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea\": container with ID starting with babb0093d908afa9212a9f690643ee0c04c32533606c0b0770d149d7257657ea not found: ID does not exist" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.105273 4880 scope.go:117] "RemoveContainer" containerID="a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044" Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.109479 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044\": container with ID starting with 
a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044 not found: ID does not exist" containerID="a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.109501 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044"} err="failed to get container status \"a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044\": rpc error: code = NotFound desc = could not find container \"a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044\": container with ID starting with a766d5d5e1a3a0ad77c198c179017de1c6d1ce81a3d282fe62fd7cd8e9da3044 not found: ID does not exist" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.109515 4880 scope.go:117] "RemoveContainer" containerID="3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4" Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.111157 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4\": container with ID starting with 3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4 not found: ID does not exist" containerID="3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.111173 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4"} err="failed to get container status \"3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4\": rpc error: code = NotFound desc = could not find container \"3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4\": container with ID starting with 3ca7f85dca3abd6926195dfb6f9ff3ed59dddb4bdb97869f6896b346e4ce6ce4 not found: ID does not 
exist" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.207921 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.289780 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-szfn6" event={"ID":"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72","Type":"ContainerDied","Data":"ce21ae9d1a9c6310e7db15b8d4c0d980bcaffe96833b50646f00eb350aaee67b"} Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.289820 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce21ae9d1a9c6310e7db15b8d4c0d980bcaffe96833b50646f00eb350aaee67b" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.289794 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-szfn6" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.334003 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-config-data\") pod \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.334086 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjbwx\" (UniqueName: \"kubernetes.io/projected/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-kube-api-access-wjbwx\") pod \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.334184 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-combined-ca-bundle\") pod \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\" (UID: 
\"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.334318 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-scripts\") pod \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\" (UID: \"a96025ed-9ab7-4d57-be27-b2cc9b1f5d72\") " Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.355249 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-kube-api-access-wjbwx" (OuterVolumeSpecName: "kube-api-access-wjbwx") pod "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" (UID: "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72"). InnerVolumeSpecName "kube-api-access-wjbwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.374051 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-scripts" (OuterVolumeSpecName: "scripts") pod "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" (UID: "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.375693 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" (UID: "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.378122 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-config-data" (OuterVolumeSpecName: "config-data") pod "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" (UID: "a96025ed-9ab7-4d57-be27-b2cc9b1f5d72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.435271 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.436237 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.436271 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjbwx\" (UniqueName: \"kubernetes.io/projected/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-kube-api-access-wjbwx\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.436301 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.436313 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.534670 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.535657 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.536840 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 03:16:41 crc kubenswrapper[4880]: E1201 03:16:41.536928 4880 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="59afeb64-c66b-4908-b0d7-1099ec2dd375" containerName="nova-scheduler-scheduler" Dec 01 03:16:41 crc kubenswrapper[4880]: I1201 03:16:41.577522 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.290623 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 03:16:42 crc kubenswrapper[4880]: E1201 03:16:42.297327 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" containerName="nova-cell1-conductor-db-sync" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.297351 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" 
containerName="nova-cell1-conductor-db-sync" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.297540 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" containerName="nova-cell1-conductor-db-sync" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.298185 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.306233 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.306716 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.316697 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e9b3c27-b580-4a32-88ce-ada9ffb57f79","Type":"ContainerStarted","Data":"c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286"} Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.316755 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e9b3c27-b580-4a32-88ce-ada9ffb57f79","Type":"ContainerStarted","Data":"f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f"} Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.316768 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e9b3c27-b580-4a32-88ce-ada9ffb57f79","Type":"ContainerStarted","Data":"c00bf9c534c61b44290f4c1d7b70c3d2e9fdc3b4aff8d166c03ba41fb601f420"} Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.329665 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerStarted","Data":"d30a068ccbacd60d8ad826db90228433c3e3fb06ed1f1c5def7b2a6b98c285c6"} Dec 01 03:16:42 crc 
kubenswrapper[4880]: I1201 03:16:42.329716 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerStarted","Data":"4897d172d15570dd00cc7208f1b917d56f54fa32526a5806d52db8aecc76b9b5"} Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.329727 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerStarted","Data":"24ab510c8c9d0c7b57a4c6046f9418328f8dd96e5e683dad81b0e85aac647d97"} Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.353058 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b25735-c93b-40e3-909b-532f75bb88b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.353369 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bhh\" (UniqueName: \"kubernetes.io/projected/d3b25735-c93b-40e3-909b-532f75bb88b2-kube-api-access-27bhh\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.353577 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b25735-c93b-40e3-909b-532f75bb88b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.353980 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.353965486 
podStartE2EDuration="2.353965486s" podCreationTimestamp="2025-12-01 03:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:42.351818084 +0000 UTC m=+1231.863072486" watchObservedRunningTime="2025-12-01 03:16:42.353965486 +0000 UTC m=+1231.865219858" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.454814 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b25735-c93b-40e3-909b-532f75bb88b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.454979 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b25735-c93b-40e3-909b-532f75bb88b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.455025 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27bhh\" (UniqueName: \"kubernetes.io/projected/d3b25735-c93b-40e3-909b-532f75bb88b2-kube-api-access-27bhh\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.458049 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b25735-c93b-40e3-909b-532f75bb88b2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.472003 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b25735-c93b-40e3-909b-532f75bb88b2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.477994 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27bhh\" (UniqueName: \"kubernetes.io/projected/d3b25735-c93b-40e3-909b-532f75bb88b2-kube-api-access-27bhh\") pod \"nova-cell1-conductor-0\" (UID: \"d3b25735-c93b-40e3-909b-532f75bb88b2\") " pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:42 crc kubenswrapper[4880]: I1201 03:16:42.626229 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.221407 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.339614 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerStarted","Data":"3404d6776b7d34ab1d53bfc7a51cb49621d98362a84b19338432a2ff8895162b"} Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.345795 4880 generic.go:334] "Generic (PLEG): container finished" podID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerID="d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca" exitCode=0 Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.345842 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d61218a6-8e9b-4255-9fa7-5212e1ef30b5","Type":"ContainerDied","Data":"d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca"} Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.345859 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d61218a6-8e9b-4255-9fa7-5212e1ef30b5","Type":"ContainerDied","Data":"68c827d605ce27db56434478dad7cb81d55bd639f3cc0182a6c29bf64f2c996c"} Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.345880 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c827d605ce27db56434478dad7cb81d55bd639f3cc0182a6c29bf64f2c996c" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.347717 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3b25735-c93b-40e3-909b-532f75bb88b2","Type":"ContainerStarted","Data":"25f95b2535beb3a5225718134ab9142c83ac4ffe6bb41e6a097d828654f36771"} Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.486504 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.574885 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-config-data\") pod \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.575165 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-logs\") pod \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.575190 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwvx9\" (UniqueName: \"kubernetes.io/projected/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-kube-api-access-mwvx9\") pod \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.575235 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-combined-ca-bundle\") pod \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\" (UID: \"d61218a6-8e9b-4255-9fa7-5212e1ef30b5\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.575642 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-logs" (OuterVolumeSpecName: "logs") pod "d61218a6-8e9b-4255-9fa7-5212e1ef30b5" (UID: "d61218a6-8e9b-4255-9fa7-5212e1ef30b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.581222 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-kube-api-access-mwvx9" (OuterVolumeSpecName: "kube-api-access-mwvx9") pod "d61218a6-8e9b-4255-9fa7-5212e1ef30b5" (UID: "d61218a6-8e9b-4255-9fa7-5212e1ef30b5"). InnerVolumeSpecName "kube-api-access-mwvx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.608394 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-config-data" (OuterVolumeSpecName: "config-data") pod "d61218a6-8e9b-4255-9fa7-5212e1ef30b5" (UID: "d61218a6-8e9b-4255-9fa7-5212e1ef30b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.611706 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d61218a6-8e9b-4255-9fa7-5212e1ef30b5" (UID: "d61218a6-8e9b-4255-9fa7-5212e1ef30b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.676669 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.677037 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.677048 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwvx9\" (UniqueName: \"kubernetes.io/projected/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-kube-api-access-mwvx9\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.677059 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d61218a6-8e9b-4255-9fa7-5212e1ef30b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.753702 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.777769 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-combined-ca-bundle\") pod \"59afeb64-c66b-4908-b0d7-1099ec2dd375\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.777856 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8j2\" (UniqueName: \"kubernetes.io/projected/59afeb64-c66b-4908-b0d7-1099ec2dd375-kube-api-access-rh8j2\") pod \"59afeb64-c66b-4908-b0d7-1099ec2dd375\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.778054 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-config-data\") pod \"59afeb64-c66b-4908-b0d7-1099ec2dd375\" (UID: \"59afeb64-c66b-4908-b0d7-1099ec2dd375\") " Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.817017 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59afeb64-c66b-4908-b0d7-1099ec2dd375-kube-api-access-rh8j2" (OuterVolumeSpecName: "kube-api-access-rh8j2") pod "59afeb64-c66b-4908-b0d7-1099ec2dd375" (UID: "59afeb64-c66b-4908-b0d7-1099ec2dd375"). InnerVolumeSpecName "kube-api-access-rh8j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.822184 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-config-data" (OuterVolumeSpecName: "config-data") pod "59afeb64-c66b-4908-b0d7-1099ec2dd375" (UID: "59afeb64-c66b-4908-b0d7-1099ec2dd375"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.873855 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59afeb64-c66b-4908-b0d7-1099ec2dd375" (UID: "59afeb64-c66b-4908-b0d7-1099ec2dd375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.880103 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.880236 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8j2\" (UniqueName: \"kubernetes.io/projected/59afeb64-c66b-4908-b0d7-1099ec2dd375-kube-api-access-rh8j2\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:43 crc kubenswrapper[4880]: I1201 03:16:43.880334 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59afeb64-c66b-4908-b0d7-1099ec2dd375-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.355767 4880 generic.go:334] "Generic (PLEG): container finished" podID="59afeb64-c66b-4908-b0d7-1099ec2dd375" containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" exitCode=0 Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.355840 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59afeb64-c66b-4908-b0d7-1099ec2dd375","Type":"ContainerDied","Data":"dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c"} Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.356097 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"59afeb64-c66b-4908-b0d7-1099ec2dd375","Type":"ContainerDied","Data":"a85e5f12069507374a4701aa05c698de4aa625737c4be3a07d50806d026177a3"} Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.356112 4880 scope.go:117] "RemoveContainer" containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.355863 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.358042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3b25735-c93b-40e3-909b-532f75bb88b2","Type":"ContainerStarted","Data":"6b306c0cbf5d3c943b8f211ec72525f7726dd83aae8153a711ac27e521400f45"} Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.358357 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.360455 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.360463 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerStarted","Data":"79da9fa97173d91fbdf6d727d8ab6b4c82096a8cb38ad0a37dc6171d2c2a8bac"} Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.360668 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.396592 4880 scope.go:117] "RemoveContainer" containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" Dec 01 03:16:44 crc kubenswrapper[4880]: E1201 03:16:44.397224 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c\": container with ID starting with dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c not found: ID does not exist" containerID="dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.397350 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c"} err="failed to get container status \"dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c\": rpc error: code = NotFound desc = could not find container \"dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c\": container with ID starting with dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c not found: ID does not exist" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.401739 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.401724439 podStartE2EDuration="2.401724439s" 
podCreationTimestamp="2025-12-01 03:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:44.394488082 +0000 UTC m=+1233.905742454" watchObservedRunningTime="2025-12-01 03:16:44.401724439 +0000 UTC m=+1233.912978811" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.434681 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.09227027 podStartE2EDuration="4.434665713s" podCreationTimestamp="2025-12-01 03:16:40 +0000 UTC" firstStartedPulling="2025-12-01 03:16:41.433387181 +0000 UTC m=+1230.944641553" lastFinishedPulling="2025-12-01 03:16:43.775782624 +0000 UTC m=+1233.287036996" observedRunningTime="2025-12-01 03:16:44.427051117 +0000 UTC m=+1233.938305489" watchObservedRunningTime="2025-12-01 03:16:44.434665713 +0000 UTC m=+1233.945920085" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.464186 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.474645 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.487752 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: E1201 03:16:44.488152 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-log" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.488169 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-log" Dec 01 03:16:44 crc kubenswrapper[4880]: E1201 03:16:44.488187 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59afeb64-c66b-4908-b0d7-1099ec2dd375" containerName="nova-scheduler-scheduler" Dec 01 
03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.488193 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="59afeb64-c66b-4908-b0d7-1099ec2dd375" containerName="nova-scheduler-scheduler" Dec 01 03:16:44 crc kubenswrapper[4880]: E1201 03:16:44.488212 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-api" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.488218 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-api" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.488399 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-log" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.488424 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="59afeb64-c66b-4908-b0d7-1099ec2dd375" containerName="nova-scheduler-scheduler" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.488437 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" containerName="nova-api-api" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.489037 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.497977 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.503479 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.532104 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.545983 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.565477 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.566928 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.571279 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.592969 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.593013 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fd4377-fc6f-43db-a0be-846fa266fe32-logs\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.593111 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-config-data\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.593140 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blm2p\" (UniqueName: \"kubernetes.io/projected/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-kube-api-access-blm2p\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.593226 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-config-data\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.593245 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwd6\" (UniqueName: \"kubernetes.io/projected/e0fd4377-fc6f-43db-a0be-846fa266fe32-kube-api-access-jdwd6\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.593269 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.601951 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:44 crc 
kubenswrapper[4880]: I1201 03:16:44.697132 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-config-data\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697204 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blm2p\" (UniqueName: \"kubernetes.io/projected/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-kube-api-access-blm2p\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697309 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-config-data\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697325 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwd6\" (UniqueName: \"kubernetes.io/projected/e0fd4377-fc6f-43db-a0be-846fa266fe32-kube-api-access-jdwd6\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697353 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697391 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697417 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fd4377-fc6f-43db-a0be-846fa266fe32-logs\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.697832 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fd4377-fc6f-43db-a0be-846fa266fe32-logs\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.739109 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.739174 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-config-data\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.739625 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-config-data\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.739755 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.744324 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwd6\" (UniqueName: \"kubernetes.io/projected/e0fd4377-fc6f-43db-a0be-846fa266fe32-kube-api-access-jdwd6\") pod \"nova-api-0\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " pod="openstack/nova-api-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.755399 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blm2p\" (UniqueName: \"kubernetes.io/projected/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-kube-api-access-blm2p\") pod \"nova-scheduler-0\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.793540 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59afeb64-c66b-4908-b0d7-1099ec2dd375" path="/var/lib/kubelet/pods/59afeb64-c66b-4908-b0d7-1099ec2dd375/volumes" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.794411 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61218a6-8e9b-4255-9fa7-5212e1ef30b5" path="/var/lib/kubelet/pods/d61218a6-8e9b-4255-9fa7-5212e1ef30b5/volumes" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.819064 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:16:44 crc kubenswrapper[4880]: I1201 03:16:44.893040 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:16:45 crc kubenswrapper[4880]: I1201 03:16:45.344746 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:16:45 crc kubenswrapper[4880]: I1201 03:16:45.372145 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3","Type":"ContainerStarted","Data":"809f323702e4e7947da45606cbd0d6f869a6df706e08778500f5dffd7f8a8e2f"} Dec 01 03:16:45 crc kubenswrapper[4880]: I1201 03:16:45.476650 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:16:45 crc kubenswrapper[4880]: I1201 03:16:45.906675 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 03:16:45 crc kubenswrapper[4880]: I1201 03:16:45.907379 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 03:16:46 crc kubenswrapper[4880]: I1201 03:16:46.387233 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3","Type":"ContainerStarted","Data":"25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de"} Dec 01 03:16:46 crc kubenswrapper[4880]: I1201 03:16:46.396475 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0fd4377-fc6f-43db-a0be-846fa266fe32","Type":"ContainerStarted","Data":"9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69"} Dec 01 03:16:46 crc kubenswrapper[4880]: I1201 03:16:46.396520 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0fd4377-fc6f-43db-a0be-846fa266fe32","Type":"ContainerStarted","Data":"2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201"} Dec 01 03:16:46 crc kubenswrapper[4880]: I1201 03:16:46.396536 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"e0fd4377-fc6f-43db-a0be-846fa266fe32","Type":"ContainerStarted","Data":"a7bd4d8ac4172b4776d6dbbf8a12d1be7255bd8ccdff1a4bc0726de91499df65"} Dec 01 03:16:46 crc kubenswrapper[4880]: I1201 03:16:46.415189 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4151743740000002 podStartE2EDuration="2.415174374s" podCreationTimestamp="2025-12-01 03:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:46.4092887 +0000 UTC m=+1235.920543082" watchObservedRunningTime="2025-12-01 03:16:46.415174374 +0000 UTC m=+1235.926428746" Dec 01 03:16:46 crc kubenswrapper[4880]: I1201 03:16:46.454814 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.45478612 podStartE2EDuration="2.45478612s" podCreationTimestamp="2025-12-01 03:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:16:46.440888401 +0000 UTC m=+1235.952142783" watchObservedRunningTime="2025-12-01 03:16:46.45478612 +0000 UTC m=+1235.966040512" Dec 01 03:16:49 crc kubenswrapper[4880]: I1201 03:16:49.819943 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 03:16:50 crc kubenswrapper[4880]: I1201 03:16:50.907108 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 03:16:50 crc kubenswrapper[4880]: I1201 03:16:50.907160 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 03:16:51 crc kubenswrapper[4880]: I1201 03:16:51.924209 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 03:16:51 crc kubenswrapper[4880]: I1201 03:16:51.924209 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:16:52 crc kubenswrapper[4880]: I1201 03:16:52.663687 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 01 03:16:54 crc kubenswrapper[4880]: I1201 03:16:54.819220 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 03:16:54 crc kubenswrapper[4880]: I1201 03:16:54.868571 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 03:16:54 crc kubenswrapper[4880]: I1201 03:16:54.894101 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 03:16:54 crc kubenswrapper[4880]: I1201 03:16:54.894148 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 03:16:55 crc kubenswrapper[4880]: I1201 03:16:55.550607 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 03:16:55 crc kubenswrapper[4880]: I1201 03:16:55.977253 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 03:16:55 crc kubenswrapper[4880]: I1201 03:16:55.977234 
4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 03:17:00 crc kubenswrapper[4880]: I1201 03:17:00.916551 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 03:17:00 crc kubenswrapper[4880]: I1201 03:17:00.917976 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 03:17:00 crc kubenswrapper[4880]: I1201 03:17:00.931007 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 03:17:01 crc kubenswrapper[4880]: I1201 03:17:01.585462 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 03:17:02 crc kubenswrapper[4880]: W1201 03:17:02.057117 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59afeb64_c66b_4908_b0d7_1099ec2dd375.slice/crio-dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c.scope WatchSource:0}: Error finding container dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c: Status 404 returned error can't find the container with id dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c Dec 01 03:17:02 crc kubenswrapper[4880]: E1201 03:17:02.365952 4880 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59afeb64_c66b_4908_b0d7_1099ec2dd375.slice/crio-conmon-dadb85dde044faa5416188755cdd2bdb45a243f4879c2f2dd6c352edf3a8827c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83487d74_fdd0_493c_8b33_5de13ec1ed53.slice/crio-conmon-042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda96025ed_9ab7_4d57_be27_b2cc9b1f5d72.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59afeb64_c66b_4908_b0d7_1099ec2dd375.slice/crio-a85e5f12069507374a4701aa05c698de4aa625737c4be3a07d50806d026177a3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda96025ed_9ab7_4d57_be27_b2cc9b1f5d72.slice/crio-ce21ae9d1a9c6310e7db15b8d4c0d980bcaffe96833b50646f00eb350aaee67b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ceebad3_8008_4756_a34d_ff98489fe8f8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ceebad3_8008_4756_a34d_ff98489fe8f8.slice/crio-d7948bad8c7017eb20aa5ad29c00ccc4807e10923d0bbf7f3c07fa1a089f0f81\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59afeb64_c66b_4908_b0d7_1099ec2dd375.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83487d74_fdd0_493c_8b33_5de13ec1ed53.slice/crio-042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-conmon-d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61218a6_8e9b_4255_9fa7_5212e1ef30b5.slice/crio-68c827d605ce27db56434478dad7cb81d55bd639f3cc0182a6c29bf64f2c996c\": RecentStats: unable to find data in memory cache]" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.402185 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.594995 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmb5\" (UniqueName: \"kubernetes.io/projected/83487d74-fdd0-493c-8b33-5de13ec1ed53-kube-api-access-xdmb5\") pod \"83487d74-fdd0-493c-8b33-5de13ec1ed53\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.595428 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-combined-ca-bundle\") pod \"83487d74-fdd0-493c-8b33-5de13ec1ed53\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.596202 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-config-data\") pod \"83487d74-fdd0-493c-8b33-5de13ec1ed53\" (UID: \"83487d74-fdd0-493c-8b33-5de13ec1ed53\") " Dec 01 03:17:02 
crc kubenswrapper[4880]: I1201 03:17:02.597009 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83487d74-fdd0-493c-8b33-5de13ec1ed53","Type":"ContainerDied","Data":"042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff"} Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.597008 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.597083 4880 scope.go:117] "RemoveContainer" containerID="042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.596948 4880 generic.go:334] "Generic (PLEG): container finished" podID="83487d74-fdd0-493c-8b33-5de13ec1ed53" containerID="042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff" exitCode=137 Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.597274 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83487d74-fdd0-493c-8b33-5de13ec1ed53","Type":"ContainerDied","Data":"0d104266e9a45c15a227f41616e0a38a2d2fbe62165bdddd7116f9b0dde7a6ba"} Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.608321 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83487d74-fdd0-493c-8b33-5de13ec1ed53-kube-api-access-xdmb5" (OuterVolumeSpecName: "kube-api-access-xdmb5") pod "83487d74-fdd0-493c-8b33-5de13ec1ed53" (UID: "83487d74-fdd0-493c-8b33-5de13ec1ed53"). InnerVolumeSpecName "kube-api-access-xdmb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.626756 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83487d74-fdd0-493c-8b33-5de13ec1ed53" (UID: "83487d74-fdd0-493c-8b33-5de13ec1ed53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.630217 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-config-data" (OuterVolumeSpecName: "config-data") pod "83487d74-fdd0-493c-8b33-5de13ec1ed53" (UID: "83487d74-fdd0-493c-8b33-5de13ec1ed53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.701842 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.701928 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmb5\" (UniqueName: \"kubernetes.io/projected/83487d74-fdd0-493c-8b33-5de13ec1ed53-kube-api-access-xdmb5\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.701952 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83487d74-fdd0-493c-8b33-5de13ec1ed53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.722129 4880 scope.go:117] "RemoveContainer" containerID="042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff" Dec 01 03:17:02 crc kubenswrapper[4880]: E1201 03:17:02.723203 4880 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff\": container with ID starting with 042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff not found: ID does not exist" containerID="042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.723397 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff"} err="failed to get container status \"042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff\": rpc error: code = NotFound desc = could not find container \"042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff\": container with ID starting with 042b3b135150e837cf2ce2557f38220603495edda6b7d6aa0c5d210a9b7bd8ff not found: ID does not exist" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.929966 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.950027 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.958791 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:17:02 crc kubenswrapper[4880]: E1201 03:17:02.959264 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83487d74-fdd0-493c-8b33-5de13ec1ed53" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.959284 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="83487d74-fdd0-493c-8b33-5de13ec1ed53" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.959778 4880 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="83487d74-fdd0-493c-8b33-5de13ec1ed53" containerName="nova-cell1-novncproxy-novncproxy" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.961857 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.968112 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.968205 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.968267 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 01 03:17:02 crc kubenswrapper[4880]: I1201 03:17:02.975984 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.005380 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.005515 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.005592 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fcd5b\" (UniqueName: \"kubernetes.io/projected/6d82ad44-6d18-43d0-8bd0-cbc405db4877-kube-api-access-fcd5b\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.005638 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.005667 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.107690 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcd5b\" (UniqueName: \"kubernetes.io/projected/6d82ad44-6d18-43d0-8bd0-cbc405db4877-kube-api-access-fcd5b\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.108010 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.108051 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.108098 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.108200 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.114541 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.114669 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.125515 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.126329 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d82ad44-6d18-43d0-8bd0-cbc405db4877-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.136213 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcd5b\" (UniqueName: \"kubernetes.io/projected/6d82ad44-6d18-43d0-8bd0-cbc405db4877-kube-api-access-fcd5b\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d82ad44-6d18-43d0-8bd0-cbc405db4877\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.279916 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:03 crc kubenswrapper[4880]: I1201 03:17:03.784860 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.620203 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d82ad44-6d18-43d0-8bd0-cbc405db4877","Type":"ContainerStarted","Data":"ba60f1670d32fd9cfbc1992838aa8bedf5e349faca6b28a72f828f2cf0c1553a"} Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.620550 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d82ad44-6d18-43d0-8bd0-cbc405db4877","Type":"ContainerStarted","Data":"e040b71c27baa41a91ef201ca55c15e445fdba14537b9a926178632913326c88"} Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.658698 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.658676836 podStartE2EDuration="2.658676836s" podCreationTimestamp="2025-12-01 03:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:04.64734216 +0000 UTC m=+1254.158596542" watchObservedRunningTime="2025-12-01 03:17:04.658676836 +0000 UTC m=+1254.169931208" Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.798476 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83487d74-fdd0-493c-8b33-5de13ec1ed53" path="/var/lib/kubelet/pods/83487d74-fdd0-493c-8b33-5de13ec1ed53/volumes" Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.900819 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.902174 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.905441 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 03:17:04 crc kubenswrapper[4880]: I1201 03:17:04.907021 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.631219 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.635194 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.881504 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f7897ddc5-444fn"] Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.883124 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.889460 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7897ddc5-444fn"] Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.968634 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.968676 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-swift-storage-0\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.968700 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.968728 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-config\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.968763 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbbg\" (UniqueName: \"kubernetes.io/projected/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-kube-api-access-cgbbg\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:05 crc kubenswrapper[4880]: I1201 03:17:05.968828 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-svc\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.070000 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.070046 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-swift-storage-0\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.070075 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.070110 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-config\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.070138 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgbbg\" (UniqueName: \"kubernetes.io/projected/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-kube-api-access-cgbbg\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.070208 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-svc\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.071267 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.071365 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-svc\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.071576 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-swift-storage-0\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.071701 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-config\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.071712 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.107956 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgbbg\" (UniqueName: \"kubernetes.io/projected/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-kube-api-access-cgbbg\") pod \"dnsmasq-dns-6f7897ddc5-444fn\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.225820 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:06 crc kubenswrapper[4880]: I1201 03:17:06.687722 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7897ddc5-444fn"] Dec 01 03:17:07 crc kubenswrapper[4880]: I1201 03:17:07.644419 4880 generic.go:334] "Generic (PLEG): container finished" podID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerID="9b2d3d079313049e00b83486dcb839620014dd5107516d883c7abd737aabcaf5" exitCode=0 Dec 01 03:17:07 crc kubenswrapper[4880]: I1201 03:17:07.644477 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" event={"ID":"f40b8e9f-f0a2-41fb-9141-80262a6f64bb","Type":"ContainerDied","Data":"9b2d3d079313049e00b83486dcb839620014dd5107516d883c7abd737aabcaf5"} Dec 01 03:17:07 crc kubenswrapper[4880]: I1201 03:17:07.644749 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" event={"ID":"f40b8e9f-f0a2-41fb-9141-80262a6f64bb","Type":"ContainerStarted","Data":"bf8f07c382cd3c31499c95f63e769019c17cf76f30749abe10379586dafd5577"} Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.222353 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.223062 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-central-agent" containerID="cri-o://4897d172d15570dd00cc7208f1b917d56f54fa32526a5806d52db8aecc76b9b5" gracePeriod=30 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.223206 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="proxy-httpd" containerID="cri-o://79da9fa97173d91fbdf6d727d8ab6b4c82096a8cb38ad0a37dc6171d2c2a8bac" gracePeriod=30 Dec 01 03:17:08 crc 
kubenswrapper[4880]: I1201 03:17:08.223252 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="sg-core" containerID="cri-o://3404d6776b7d34ab1d53bfc7a51cb49621d98362a84b19338432a2ff8895162b" gracePeriod=30 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.223288 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-notification-agent" containerID="cri-o://d30a068ccbacd60d8ad826db90228433c3e3fb06ed1f1c5def7b2a6b98c285c6" gracePeriod=30 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.239555 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": EOF" Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.280279 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.405840 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.672009 4880 generic.go:334] "Generic (PLEG): container finished" podID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerID="79da9fa97173d91fbdf6d727d8ab6b4c82096a8cb38ad0a37dc6171d2c2a8bac" exitCode=0 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.672215 4880 generic.go:334] "Generic (PLEG): container finished" podID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerID="3404d6776b7d34ab1d53bfc7a51cb49621d98362a84b19338432a2ff8895162b" exitCode=2 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.672223 4880 generic.go:334] "Generic (PLEG): container finished" podID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" 
containerID="4897d172d15570dd00cc7208f1b917d56f54fa32526a5806d52db8aecc76b9b5" exitCode=0 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.672258 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerDied","Data":"79da9fa97173d91fbdf6d727d8ab6b4c82096a8cb38ad0a37dc6171d2c2a8bac"} Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.672282 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerDied","Data":"3404d6776b7d34ab1d53bfc7a51cb49621d98362a84b19338432a2ff8895162b"} Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.672291 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerDied","Data":"4897d172d15570dd00cc7208f1b917d56f54fa32526a5806d52db8aecc76b9b5"} Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.673750 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-log" containerID="cri-o://2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201" gracePeriod=30 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.674683 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" event={"ID":"f40b8e9f-f0a2-41fb-9141-80262a6f64bb","Type":"ContainerStarted","Data":"1f22914ddc5d7595b5517c3afda303efadda1a4edd095190f5e2a98365228198"} Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.674708 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.674962 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-api" containerID="cri-o://9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69" gracePeriod=30 Dec 01 03:17:08 crc kubenswrapper[4880]: I1201 03:17:08.700442 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" podStartSLOduration=3.700423109 podStartE2EDuration="3.700423109s" podCreationTimestamp="2025-12-01 03:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:08.694241948 +0000 UTC m=+1258.205496320" watchObservedRunningTime="2025-12-01 03:17:08.700423109 +0000 UTC m=+1258.211677481" Dec 01 03:17:09 crc kubenswrapper[4880]: I1201 03:17:09.684006 4880 generic.go:334] "Generic (PLEG): container finished" podID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerID="2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201" exitCode=143 Dec 01 03:17:09 crc kubenswrapper[4880]: I1201 03:17:09.684953 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0fd4377-fc6f-43db-a0be-846fa266fe32","Type":"ContainerDied","Data":"2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201"} Dec 01 03:17:10 crc kubenswrapper[4880]: I1201 03:17:10.818568 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.256596 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.304554 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-config-data\") pod \"e0fd4377-fc6f-43db-a0be-846fa266fe32\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.304638 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-combined-ca-bundle\") pod \"e0fd4377-fc6f-43db-a0be-846fa266fe32\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.304717 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fd4377-fc6f-43db-a0be-846fa266fe32-logs\") pod \"e0fd4377-fc6f-43db-a0be-846fa266fe32\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.304763 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwd6\" (UniqueName: \"kubernetes.io/projected/e0fd4377-fc6f-43db-a0be-846fa266fe32-kube-api-access-jdwd6\") pod \"e0fd4377-fc6f-43db-a0be-846fa266fe32\" (UID: \"e0fd4377-fc6f-43db-a0be-846fa266fe32\") " Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.310309 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0fd4377-fc6f-43db-a0be-846fa266fe32-logs" (OuterVolumeSpecName: "logs") pod "e0fd4377-fc6f-43db-a0be-846fa266fe32" (UID: "e0fd4377-fc6f-43db-a0be-846fa266fe32"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.324519 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fd4377-fc6f-43db-a0be-846fa266fe32-kube-api-access-jdwd6" (OuterVolumeSpecName: "kube-api-access-jdwd6") pod "e0fd4377-fc6f-43db-a0be-846fa266fe32" (UID: "e0fd4377-fc6f-43db-a0be-846fa266fe32"). InnerVolumeSpecName "kube-api-access-jdwd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.358091 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0fd4377-fc6f-43db-a0be-846fa266fe32" (UID: "e0fd4377-fc6f-43db-a0be-846fa266fe32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.390501 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-config-data" (OuterVolumeSpecName: "config-data") pod "e0fd4377-fc6f-43db-a0be-846fa266fe32" (UID: "e0fd4377-fc6f-43db-a0be-846fa266fe32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.413985 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.414011 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0fd4377-fc6f-43db-a0be-846fa266fe32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.414022 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0fd4377-fc6f-43db-a0be-846fa266fe32-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.414031 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdwd6\" (UniqueName: \"kubernetes.io/projected/e0fd4377-fc6f-43db-a0be-846fa266fe32-kube-api-access-jdwd6\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.748302 4880 generic.go:334] "Generic (PLEG): container finished" podID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerID="9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69" exitCode=0 Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.748353 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0fd4377-fc6f-43db-a0be-846fa266fe32","Type":"ContainerDied","Data":"9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69"} Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.748360 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.748383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0fd4377-fc6f-43db-a0be-846fa266fe32","Type":"ContainerDied","Data":"a7bd4d8ac4172b4776d6dbbf8a12d1be7255bd8ccdff1a4bc0726de91499df65"} Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.748405 4880 scope.go:117] "RemoveContainer" containerID="9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.785585 4880 scope.go:117] "RemoveContainer" containerID="2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.804732 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.810236 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.824302 4880 scope.go:117] "RemoveContainer" containerID="9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69" Dec 01 03:17:12 crc kubenswrapper[4880]: E1201 03:17:12.826003 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69\": container with ID starting with 9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69 not found: ID does not exist" containerID="9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.826073 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69"} err="failed to get container status \"9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69\": rpc error: code = 
NotFound desc = could not find container \"9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69\": container with ID starting with 9b0d232014dbcf706c89cb77d669ec7204bc14f029aa0d8023a9a580b1865e69 not found: ID does not exist" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.826103 4880 scope.go:117] "RemoveContainer" containerID="2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201" Dec 01 03:17:12 crc kubenswrapper[4880]: E1201 03:17:12.826494 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201\": container with ID starting with 2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201 not found: ID does not exist" containerID="2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.826521 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201"} err="failed to get container status \"2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201\": rpc error: code = NotFound desc = could not find container \"2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201\": container with ID starting with 2aede52f4db662c584f2a893857c3da7e977f2e5604befed96e03fc5a714b201 not found: ID does not exist" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.834779 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:12 crc kubenswrapper[4880]: E1201 03:17:12.835349 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-log" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.835365 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" 
containerName="nova-api-log" Dec 01 03:17:12 crc kubenswrapper[4880]: E1201 03:17:12.835406 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-api" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.835413 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-api" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.835644 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-api" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.835659 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" containerName="nova-api-log" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.836840 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.842146 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.842301 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.842441 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.843311 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.924054 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f752a0e8-6b44-4ddf-9231-039426e5bb7e-logs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:12 
crc kubenswrapper[4880]: I1201 03:17:12.924117 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.924172 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-config-data\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.924222 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.924244 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvzr\" (UniqueName: \"kubernetes.io/projected/f752a0e8-6b44-4ddf-9231-039426e5bb7e-kube-api-access-msvzr\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:12 crc kubenswrapper[4880]: I1201 03:17:12.924264 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.025214 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-config-data\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.025290 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.025314 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msvzr\" (UniqueName: \"kubernetes.io/projected/f752a0e8-6b44-4ddf-9231-039426e5bb7e-kube-api-access-msvzr\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.025333 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.025371 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f752a0e8-6b44-4ddf-9231-039426e5bb7e-logs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.025411 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc 
kubenswrapper[4880]: I1201 03:17:13.027101 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f752a0e8-6b44-4ddf-9231-039426e5bb7e-logs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.030747 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.031298 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-public-tls-certs\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.032413 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.042650 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-config-data\") pod \"nova-api-0\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.043020 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msvzr\" (UniqueName: \"kubernetes.io/projected/f752a0e8-6b44-4ddf-9231-039426e5bb7e-kube-api-access-msvzr\") pod \"nova-api-0\" (UID: 
\"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.165196 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.281037 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.333655 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.459026 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.763154 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f752a0e8-6b44-4ddf-9231-039426e5bb7e","Type":"ContainerStarted","Data":"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c"} Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.763756 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f752a0e8-6b44-4ddf-9231-039426e5bb7e","Type":"ContainerStarted","Data":"bced722084975e268880a7204a0af3c57e4160d092ecefbfd82c028892ca96ad"} Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.768954 4880 generic.go:334] "Generic (PLEG): container finished" podID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerID="d30a068ccbacd60d8ad826db90228433c3e3fb06ed1f1c5def7b2a6b98c285c6" exitCode=0 Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.769005 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerDied","Data":"d30a068ccbacd60d8ad826db90228433c3e3fb06ed1f1c5def7b2a6b98c285c6"} Dec 01 03:17:13 crc kubenswrapper[4880]: I1201 03:17:13.792800 4880 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.033621 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.038349 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sh8fk"] Dec 01 03:17:14 crc kubenswrapper[4880]: E1201 03:17:14.038778 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="proxy-httpd" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.038793 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="proxy-httpd" Dec 01 03:17:14 crc kubenswrapper[4880]: E1201 03:17:14.038819 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="sg-core" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.038825 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="sg-core" Dec 01 03:17:14 crc kubenswrapper[4880]: E1201 03:17:14.038841 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-notification-agent" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.038846 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-notification-agent" Dec 01 03:17:14 crc kubenswrapper[4880]: E1201 03:17:14.038855 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-central-agent" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.038861 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" 
containerName="ceilometer-central-agent" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.039045 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-notification-agent" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.039066 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="proxy-httpd" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.039074 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="ceilometer-central-agent" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.039081 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" containerName="sg-core" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.039746 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.048984 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sh8fk"] Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086538 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-scripts\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086674 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-log-httpd\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086722 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-combined-ca-bundle\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086746 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-sg-core-conf-yaml\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086781 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-config-data\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086918 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd9gr\" (UniqueName: \"kubernetes.io/projected/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-kube-api-access-pd9gr\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.086940 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-run-httpd\") pod \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\" (UID: \"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41\") " Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.087188 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sh8fk\" 
(UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.087292 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-config-data\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.087315 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k857l\" (UniqueName: \"kubernetes.io/projected/b861996e-b33f-4277-8c71-60d20d61bad8-kube-api-access-k857l\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.087345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-scripts\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.088780 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.089125 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.095645 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.095817 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.107919 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-kube-api-access-pd9gr" (OuterVolumeSpecName: "kube-api-access-pd9gr") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "kube-api-access-pd9gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.124285 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-scripts" (OuterVolumeSpecName: "scripts") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191521 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-scripts\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191620 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191710 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-config-data\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191724 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k857l\" (UniqueName: \"kubernetes.io/projected/b861996e-b33f-4277-8c71-60d20d61bad8-kube-api-access-k857l\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191776 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd9gr\" (UniqueName: \"kubernetes.io/projected/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-kube-api-access-pd9gr\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191787 4880 reconciler_common.go:293] "Volume 
detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191796 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.191804 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.197559 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-config-data\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.212310 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.212504 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k857l\" (UniqueName: \"kubernetes.io/projected/b861996e-b33f-4277-8c71-60d20d61bad8-kube-api-access-k857l\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.212893 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-scripts\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.213382 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sh8fk\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.273236 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.276515 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.294386 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.295365 4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.307558 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-config-data" (OuterVolumeSpecName: "config-data") pod "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" (UID: "fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.399982 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.681973 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sh8fk"] Dec 01 03:17:14 crc kubenswrapper[4880]: W1201 03:17:14.686595 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb861996e_b33f_4277_8c71_60d20d61bad8.slice/crio-b21a4af8c4c6c9770b404ec04a3197fd9f553bd58fa895ecc437ac3cf07b26bb WatchSource:0}: Error finding container b21a4af8c4c6c9770b404ec04a3197fd9f553bd58fa895ecc437ac3cf07b26bb: Status 404 returned error can't find the container with id b21a4af8c4c6c9770b404ec04a3197fd9f553bd58fa895ecc437ac3cf07b26bb Dec 01 
03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.783758 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.802967 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fd4377-fc6f-43db-a0be-846fa266fe32" path="/var/lib/kubelet/pods/e0fd4377-fc6f-43db-a0be-846fa266fe32/volumes" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.803594 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41","Type":"ContainerDied","Data":"24ab510c8c9d0c7b57a4c6046f9418328f8dd96e5e683dad81b0e85aac647d97"} Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.803622 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sh8fk" event={"ID":"b861996e-b33f-4277-8c71-60d20d61bad8","Type":"ContainerStarted","Data":"b21a4af8c4c6c9770b404ec04a3197fd9f553bd58fa895ecc437ac3cf07b26bb"} Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.803634 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f752a0e8-6b44-4ddf-9231-039426e5bb7e","Type":"ContainerStarted","Data":"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f"} Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.803748 4880 scope.go:117] "RemoveContainer" containerID="79da9fa97173d91fbdf6d727d8ab6b4c82096a8cb38ad0a37dc6171d2c2a8bac" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.840336 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.840307393 podStartE2EDuration="2.840307393s" podCreationTimestamp="2025-12-01 03:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:14.828636708 +0000 UTC m=+1264.339891120" 
watchObservedRunningTime="2025-12-01 03:17:14.840307393 +0000 UTC m=+1264.351561805" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.850966 4880 scope.go:117] "RemoveContainer" containerID="3404d6776b7d34ab1d53bfc7a51cb49621d98362a84b19338432a2ff8895162b" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.880107 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.892748 4880 scope.go:117] "RemoveContainer" containerID="d30a068ccbacd60d8ad826db90228433c3e3fb06ed1f1c5def7b2a6b98c285c6" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.895578 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.909084 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.913144 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.916802 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.916983 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.917731 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:14 crc kubenswrapper[4880]: I1201 03:17:14.995792 4880 scope.go:117] "RemoveContainer" containerID="4897d172d15570dd00cc7208f1b917d56f54fa32526a5806d52db8aecc76b9b5" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.015781 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-run-httpd\") pod \"ceilometer-0\" 
(UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.015847 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.015924 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-log-httpd\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.015946 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-scripts\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.016035 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmllk\" (UniqueName: \"kubernetes.io/projected/f44572b9-ae8e-41eb-a937-90ea818187d9-kube-api-access-xmllk\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.016090 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: 
I1201 03:17:15.016226 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-config-data\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.117806 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.117971 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-config-data\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.118047 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-run-httpd\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.118067 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.118100 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-log-httpd\") pod 
\"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.118148 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-scripts\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.118169 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmllk\" (UniqueName: \"kubernetes.io/projected/f44572b9-ae8e-41eb-a937-90ea818187d9-kube-api-access-xmllk\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.119076 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-log-httpd\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.121007 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-run-httpd\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.122576 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.125795 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-config-data\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.128905 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-scripts\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.135976 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.142713 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmllk\" (UniqueName: \"kubernetes.io/projected/f44572b9-ae8e-41eb-a937-90ea818187d9-kube-api-access-xmllk\") pod \"ceilometer-0\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.290542 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.720933 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.736571 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.815281 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sh8fk" event={"ID":"b861996e-b33f-4277-8c71-60d20d61bad8","Type":"ContainerStarted","Data":"8a80a684c3d3e4728f5a1f048e4520492094d4d1e4c81c677a984e56af3960fe"} Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.817275 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerStarted","Data":"16b0f55301ed0591be89df1a83c83269412e6d259242d8835aed8d229e27fc6a"} Dec 01 03:17:15 crc kubenswrapper[4880]: I1201 03:17:15.833965 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sh8fk" podStartSLOduration=1.833949941 podStartE2EDuration="1.833949941s" podCreationTimestamp="2025-12-01 03:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:15.831222035 +0000 UTC m=+1265.342476407" watchObservedRunningTime="2025-12-01 03:17:15.833949941 +0000 UTC m=+1265.345204313" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.228260 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.394070 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f5698bdfc-dct9n"] Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.394676 4880 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerName="dnsmasq-dns" containerID="cri-o://c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10" gracePeriod=10 Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.789182 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.794850 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41" path="/var/lib/kubelet/pods/fd0e2b06-d4d8-4f6b-a047-8bf63edd7a41/volumes" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848088 4880 generic.go:334] "Generic (PLEG): container finished" podID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerID="c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10" exitCode=0 Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848146 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" event={"ID":"7c59659f-7a35-4df4-8816-4c48a175e7a4","Type":"ContainerDied","Data":"c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10"} Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848172 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" event={"ID":"7c59659f-7a35-4df4-8816-4c48a175e7a4","Type":"ContainerDied","Data":"66f9dfd0ac2494775009862433d5e82992a5171bd42113eaec71e784e1bf5fa4"} Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848189 4880 scope.go:117] "RemoveContainer" containerID="c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848305 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f5698bdfc-dct9n" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848399 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-swift-storage-0\") pod \"7c59659f-7a35-4df4-8816-4c48a175e7a4\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848481 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8qm\" (UniqueName: \"kubernetes.io/projected/7c59659f-7a35-4df4-8816-4c48a175e7a4-kube-api-access-lm8qm\") pod \"7c59659f-7a35-4df4-8816-4c48a175e7a4\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848510 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-config\") pod \"7c59659f-7a35-4df4-8816-4c48a175e7a4\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848557 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-nb\") pod \"7c59659f-7a35-4df4-8816-4c48a175e7a4\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848574 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-svc\") pod \"7c59659f-7a35-4df4-8816-4c48a175e7a4\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.848660 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-sb\") pod \"7c59659f-7a35-4df4-8816-4c48a175e7a4\" (UID: \"7c59659f-7a35-4df4-8816-4c48a175e7a4\") " Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.853863 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerStarted","Data":"ca3241d1838c53389b916b369734dd7af3b1e06e1b4f8b154b985b01c306e88b"} Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.853917 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerStarted","Data":"e06fd97e967e8cdbfe09fb866c003975ee4e25ace627d0e6466debce41e49118"} Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.859422 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c59659f-7a35-4df4-8816-4c48a175e7a4-kube-api-access-lm8qm" (OuterVolumeSpecName: "kube-api-access-lm8qm") pod "7c59659f-7a35-4df4-8816-4c48a175e7a4" (UID: "7c59659f-7a35-4df4-8816-4c48a175e7a4"). InnerVolumeSpecName "kube-api-access-lm8qm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.903108 4880 scope.go:117] "RemoveContainer" containerID="1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.951235 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm8qm\" (UniqueName: \"kubernetes.io/projected/7c59659f-7a35-4df4-8816-4c48a175e7a4-kube-api-access-lm8qm\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.981706 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c59659f-7a35-4df4-8816-4c48a175e7a4" (UID: "7c59659f-7a35-4df4-8816-4c48a175e7a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.983277 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c59659f-7a35-4df4-8816-4c48a175e7a4" (UID: "7c59659f-7a35-4df4-8816-4c48a175e7a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:17:16 crc kubenswrapper[4880]: I1201 03:17:16.988973 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c59659f-7a35-4df4-8816-4c48a175e7a4" (UID: "7c59659f-7a35-4df4-8816-4c48a175e7a4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.006779 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c59659f-7a35-4df4-8816-4c48a175e7a4" (UID: "7c59659f-7a35-4df4-8816-4c48a175e7a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.014263 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-config" (OuterVolumeSpecName: "config") pod "7c59659f-7a35-4df4-8816-4c48a175e7a4" (UID: "7c59659f-7a35-4df4-8816-4c48a175e7a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.056013 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.056042 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.056052 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.056061 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 
03:17:17.056069 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c59659f-7a35-4df4-8816-4c48a175e7a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.115550 4880 scope.go:117] "RemoveContainer" containerID="c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10" Dec 01 03:17:17 crc kubenswrapper[4880]: E1201 03:17:17.116006 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10\": container with ID starting with c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10 not found: ID does not exist" containerID="c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.116034 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10"} err="failed to get container status \"c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10\": rpc error: code = NotFound desc = could not find container \"c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10\": container with ID starting with c6f8d6be04fbe7118d35e955382a4cd1907b8c874a556dd28e0ff9d003cf6c10 not found: ID does not exist" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.116054 4880 scope.go:117] "RemoveContainer" containerID="1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b" Dec 01 03:17:17 crc kubenswrapper[4880]: E1201 03:17:17.119586 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b\": container with ID starting with 1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b not 
found: ID does not exist" containerID="1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.119615 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b"} err="failed to get container status \"1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b\": rpc error: code = NotFound desc = could not find container \"1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b\": container with ID starting with 1ed37fb7cd833b630db6ab8d3d671344a13ef39b42d39621dc63fee33cd2653b not found: ID does not exist" Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.193214 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f5698bdfc-dct9n"] Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.205368 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f5698bdfc-dct9n"] Dec 01 03:17:17 crc kubenswrapper[4880]: I1201 03:17:17.862948 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerStarted","Data":"7b93cd21c480092d45c10f0b12e1e40bb84a205f1ec5d6bd5f1515c9ff1dd3cc"} Dec 01 03:17:18 crc kubenswrapper[4880]: I1201 03:17:18.795565 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" path="/var/lib/kubelet/pods/7c59659f-7a35-4df4-8816-4c48a175e7a4/volumes" Dec 01 03:17:18 crc kubenswrapper[4880]: I1201 03:17:18.875087 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerStarted","Data":"9fc7f02e30a47b8e06f9a239fad9c63bac87dec63e24e46ed54c17e2216a847d"} Dec 01 03:17:18 crc kubenswrapper[4880]: I1201 03:17:18.876218 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Dec 01 03:17:18 crc kubenswrapper[4880]: I1201 03:17:18.892844 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112374246 podStartE2EDuration="4.892822938s" podCreationTimestamp="2025-12-01 03:17:14 +0000 UTC" firstStartedPulling="2025-12-01 03:17:15.73636287 +0000 UTC m=+1265.247617242" lastFinishedPulling="2025-12-01 03:17:18.516811562 +0000 UTC m=+1268.028065934" observedRunningTime="2025-12-01 03:17:18.891427864 +0000 UTC m=+1268.402682236" watchObservedRunningTime="2025-12-01 03:17:18.892822938 +0000 UTC m=+1268.404077310" Dec 01 03:17:20 crc kubenswrapper[4880]: I1201 03:17:20.908415 4880 generic.go:334] "Generic (PLEG): container finished" podID="b861996e-b33f-4277-8c71-60d20d61bad8" containerID="8a80a684c3d3e4728f5a1f048e4520492094d4d1e4c81c677a984e56af3960fe" exitCode=0 Dec 01 03:17:20 crc kubenswrapper[4880]: I1201 03:17:20.909591 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sh8fk" event={"ID":"b861996e-b33f-4277-8c71-60d20d61bad8","Type":"ContainerDied","Data":"8a80a684c3d3e4728f5a1f048e4520492094d4d1e4c81c677a984e56af3960fe"} Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.253296 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.276218 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k857l\" (UniqueName: \"kubernetes.io/projected/b861996e-b33f-4277-8c71-60d20d61bad8-kube-api-access-k857l\") pod \"b861996e-b33f-4277-8c71-60d20d61bad8\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.276828 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-scripts\") pod \"b861996e-b33f-4277-8c71-60d20d61bad8\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.277183 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-config-data\") pod \"b861996e-b33f-4277-8c71-60d20d61bad8\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.277368 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-combined-ca-bundle\") pod \"b861996e-b33f-4277-8c71-60d20d61bad8\" (UID: \"b861996e-b33f-4277-8c71-60d20d61bad8\") " Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.287107 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-scripts" (OuterVolumeSpecName: "scripts") pod "b861996e-b33f-4277-8c71-60d20d61bad8" (UID: "b861996e-b33f-4277-8c71-60d20d61bad8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.297734 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b861996e-b33f-4277-8c71-60d20d61bad8-kube-api-access-k857l" (OuterVolumeSpecName: "kube-api-access-k857l") pod "b861996e-b33f-4277-8c71-60d20d61bad8" (UID: "b861996e-b33f-4277-8c71-60d20d61bad8"). InnerVolumeSpecName "kube-api-access-k857l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.314991 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-config-data" (OuterVolumeSpecName: "config-data") pod "b861996e-b33f-4277-8c71-60d20d61bad8" (UID: "b861996e-b33f-4277-8c71-60d20d61bad8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.319962 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b861996e-b33f-4277-8c71-60d20d61bad8" (UID: "b861996e-b33f-4277-8c71-60d20d61bad8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.380163 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k857l\" (UniqueName: \"kubernetes.io/projected/b861996e-b33f-4277-8c71-60d20d61bad8-kube-api-access-k857l\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.380435 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.380542 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.380631 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b861996e-b33f-4277-8c71-60d20d61bad8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.938707 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sh8fk" event={"ID":"b861996e-b33f-4277-8c71-60d20d61bad8","Type":"ContainerDied","Data":"b21a4af8c4c6c9770b404ec04a3197fd9f553bd58fa895ecc437ac3cf07b26bb"} Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.938750 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21a4af8c4c6c9770b404ec04a3197fd9f553bd58fa895ecc437ac3cf07b26bb" Dec 01 03:17:22 crc kubenswrapper[4880]: I1201 03:17:22.938782 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sh8fk" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.121676 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.121957 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-log" containerID="cri-o://662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c" gracePeriod=30 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.122375 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-api" containerID="cri-o://280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f" gracePeriod=30 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.140589 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.141006 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" containerName="nova-scheduler-scheduler" containerID="cri-o://25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" gracePeriod=30 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.162182 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.162611 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-log" containerID="cri-o://f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f" gracePeriod=30 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.163496 4880 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-metadata" containerID="cri-o://c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286" gracePeriod=30 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.691992 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.815756 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msvzr\" (UniqueName: \"kubernetes.io/projected/f752a0e8-6b44-4ddf-9231-039426e5bb7e-kube-api-access-msvzr\") pod \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.815881 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-internal-tls-certs\") pod \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.815949 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f752a0e8-6b44-4ddf-9231-039426e5bb7e-logs\") pod \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.816020 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-config-data\") pod \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.816070 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-public-tls-certs\") pod \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.816097 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-combined-ca-bundle\") pod \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\" (UID: \"f752a0e8-6b44-4ddf-9231-039426e5bb7e\") " Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.816516 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f752a0e8-6b44-4ddf-9231-039426e5bb7e-logs" (OuterVolumeSpecName: "logs") pod "f752a0e8-6b44-4ddf-9231-039426e5bb7e" (UID: "f752a0e8-6b44-4ddf-9231-039426e5bb7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.816806 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f752a0e8-6b44-4ddf-9231-039426e5bb7e-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.822307 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f752a0e8-6b44-4ddf-9231-039426e5bb7e-kube-api-access-msvzr" (OuterVolumeSpecName: "kube-api-access-msvzr") pod "f752a0e8-6b44-4ddf-9231-039426e5bb7e" (UID: "f752a0e8-6b44-4ddf-9231-039426e5bb7e"). InnerVolumeSpecName "kube-api-access-msvzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.850176 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-config-data" (OuterVolumeSpecName: "config-data") pod "f752a0e8-6b44-4ddf-9231-039426e5bb7e" (UID: "f752a0e8-6b44-4ddf-9231-039426e5bb7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.851918 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f752a0e8-6b44-4ddf-9231-039426e5bb7e" (UID: "f752a0e8-6b44-4ddf-9231-039426e5bb7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.871963 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f752a0e8-6b44-4ddf-9231-039426e5bb7e" (UID: "f752a0e8-6b44-4ddf-9231-039426e5bb7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.872845 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f752a0e8-6b44-4ddf-9231-039426e5bb7e" (UID: "f752a0e8-6b44-4ddf-9231-039426e5bb7e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.918658 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msvzr\" (UniqueName: \"kubernetes.io/projected/f752a0e8-6b44-4ddf-9231-039426e5bb7e-kube-api-access-msvzr\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.919019 4880 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.919035 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.919044 4880 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.919054 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f752a0e8-6b44-4ddf-9231-039426e5bb7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953221 4880 generic.go:334] "Generic (PLEG): container finished" podID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerID="280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f" exitCode=0 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953259 4880 generic.go:334] "Generic (PLEG): container finished" podID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerID="662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c" exitCode=143 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953296 4880 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f752a0e8-6b44-4ddf-9231-039426e5bb7e","Type":"ContainerDied","Data":"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f"} Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953320 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f752a0e8-6b44-4ddf-9231-039426e5bb7e","Type":"ContainerDied","Data":"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c"} Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953332 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f752a0e8-6b44-4ddf-9231-039426e5bb7e","Type":"ContainerDied","Data":"bced722084975e268880a7204a0af3c57e4160d092ecefbfd82c028892ca96ad"} Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953349 4880 scope.go:117] "RemoveContainer" containerID="280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.953521 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.957176 4880 generic.go:334] "Generic (PLEG): container finished" podID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerID="f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f" exitCode=143 Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.957217 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e9b3c27-b580-4a32-88ce-ada9ffb57f79","Type":"ContainerDied","Data":"f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f"} Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.993987 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:23 crc kubenswrapper[4880]: I1201 03:17:23.996187 4880 scope.go:117] "RemoveContainer" containerID="662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.014908 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.017108 4880 scope.go:117] "RemoveContainer" containerID="280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.017549 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f\": container with ID starting with 280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f not found: ID does not exist" containerID="280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.017596 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f"} err="failed to get container status 
\"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f\": rpc error: code = NotFound desc = could not find container \"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f\": container with ID starting with 280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f not found: ID does not exist" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.017621 4880 scope.go:117] "RemoveContainer" containerID="662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.017936 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c\": container with ID starting with 662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c not found: ID does not exist" containerID="662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.017975 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c"} err="failed to get container status \"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c\": rpc error: code = NotFound desc = could not find container \"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c\": container with ID starting with 662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c not found: ID does not exist" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.018001 4880 scope.go:117] "RemoveContainer" containerID="280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.018388 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f"} err="failed to get 
container status \"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f\": rpc error: code = NotFound desc = could not find container \"280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f\": container with ID starting with 280e82826c96126f5a7f7b10660eada69001f292e0855e2d4b853534436c955f not found: ID does not exist" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.018409 4880 scope.go:117] "RemoveContainer" containerID="662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.021920 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c"} err="failed to get container status \"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c\": rpc error: code = NotFound desc = could not find container \"662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c\": container with ID starting with 662f570693033dac45db6c1cef57e1d102b88819ef77ad2cea3b7dc68aac904c not found: ID does not exist" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.029102 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.029625 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerName="init" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.029657 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerName="init" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.029684 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-log" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.029692 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" 
containerName="nova-api-log" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.029703 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b861996e-b33f-4277-8c71-60d20d61bad8" containerName="nova-manage" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.029712 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b861996e-b33f-4277-8c71-60d20d61bad8" containerName="nova-manage" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.029744 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerName="dnsmasq-dns" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.029753 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerName="dnsmasq-dns" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.029780 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-api" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.029787 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-api" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.030016 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b861996e-b33f-4277-8c71-60d20d61bad8" containerName="nova-manage" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.030046 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-log" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.030062 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" containerName="nova-api-api" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.030078 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c59659f-7a35-4df4-8816-4c48a175e7a4" containerName="dnsmasq-dns" Dec 
01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.031293 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.037706 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.037856 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.037989 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.039482 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.224260 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.224319 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-config-data\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.224419 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-public-tls-certs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.224458 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abac88f8-21ed-4c58-b58f-d7c15125bbae-logs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.224479 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzn9\" (UniqueName: \"kubernetes.io/projected/abac88f8-21ed-4c58-b58f-d7c15125bbae-kube-api-access-vfzn9\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.224536 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.326260 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-public-tls-certs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.326329 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abac88f8-21ed-4c58-b58f-d7c15125bbae-logs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.326354 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzn9\" (UniqueName: 
\"kubernetes.io/projected/abac88f8-21ed-4c58-b58f-d7c15125bbae-kube-api-access-vfzn9\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.326413 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.326445 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.326467 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-config-data\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.327108 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abac88f8-21ed-4c58-b58f-d7c15125bbae-logs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.329979 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.330719 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-config-data\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.332478 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.332625 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abac88f8-21ed-4c58-b58f-d7c15125bbae-public-tls-certs\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.342003 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzn9\" (UniqueName: \"kubernetes.io/projected/abac88f8-21ed-4c58-b58f-d7c15125bbae-kube-api-access-vfzn9\") pod \"nova-api-0\" (UID: \"abac88f8-21ed-4c58-b58f-d7c15125bbae\") " pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.384937 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.794845 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f752a0e8-6b44-4ddf-9231-039426e5bb7e" path="/var/lib/kubelet/pods/f752a0e8-6b44-4ddf-9231-039426e5bb7e/volumes" Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.820674 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.821978 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.823451 4880 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 03:17:24 crc kubenswrapper[4880]: E1201 03:17:24.823500 4880 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" containerName="nova-scheduler-scheduler" Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.836454 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-0"] Dec 01 03:17:24 crc kubenswrapper[4880]: I1201 03:17:24.970596 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abac88f8-21ed-4c58-b58f-d7c15125bbae","Type":"ContainerStarted","Data":"2fb5b4f125f3aa72efbe0dd3a70915d0cb9356228f14b4407dbd7030b00488cf"} Dec 01 03:17:25 crc kubenswrapper[4880]: I1201 03:17:25.985561 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abac88f8-21ed-4c58-b58f-d7c15125bbae","Type":"ContainerStarted","Data":"5dfdf9ef28a8559245a40f66ed6deef866e6b69e843b856985bce76fdcfe207b"} Dec 01 03:17:25 crc kubenswrapper[4880]: I1201 03:17:25.985994 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abac88f8-21ed-4c58-b58f-d7c15125bbae","Type":"ContainerStarted","Data":"42c2ff23ae084629cd8fd88a3cac0c645fb9edd4e0e66be217c16a17190494ca"} Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.021789 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.021762288 podStartE2EDuration="3.021762288s" podCreationTimestamp="2025-12-01 03:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:26.011789865 +0000 UTC m=+1275.523044297" watchObservedRunningTime="2025-12-01 03:17:26.021762288 +0000 UTC m=+1275.533016680" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.302564 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:48138->10.217.0.203:8775: read: connection reset by peer" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.302637 4880 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:48134->10.217.0.203:8775: read: connection reset by peer" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.791146 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.832793 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-nova-metadata-tls-certs\") pod \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.832896 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-combined-ca-bundle\") pod \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.832984 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wd5f\" (UniqueName: \"kubernetes.io/projected/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-kube-api-access-9wd5f\") pod \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.833079 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-logs\") pod \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.833230 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-config-data\") pod \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\" (UID: \"9e9b3c27-b580-4a32-88ce-ada9ffb57f79\") " Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.848272 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-logs" (OuterVolumeSpecName: "logs") pod "9e9b3c27-b580-4a32-88ce-ada9ffb57f79" (UID: "9e9b3c27-b580-4a32-88ce-ada9ffb57f79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.872165 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-kube-api-access-9wd5f" (OuterVolumeSpecName: "kube-api-access-9wd5f") pod "9e9b3c27-b580-4a32-88ce-ada9ffb57f79" (UID: "9e9b3c27-b580-4a32-88ce-ada9ffb57f79"). InnerVolumeSpecName "kube-api-access-9wd5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.876042 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9b3c27-b580-4a32-88ce-ada9ffb57f79" (UID: "9e9b3c27-b580-4a32-88ce-ada9ffb57f79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.887025 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-config-data" (OuterVolumeSpecName: "config-data") pod "9e9b3c27-b580-4a32-88ce-ada9ffb57f79" (UID: "9e9b3c27-b580-4a32-88ce-ada9ffb57f79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.926626 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9e9b3c27-b580-4a32-88ce-ada9ffb57f79" (UID: "9e9b3c27-b580-4a32-88ce-ada9ffb57f79"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.936510 4880 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.936544 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.936553 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wd5f\" (UniqueName: \"kubernetes.io/projected/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-kube-api-access-9wd5f\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.936562 4880 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-logs\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.936570 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9b3c27-b580-4a32-88ce-ada9ffb57f79-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.998156 4880 generic.go:334] "Generic (PLEG): container finished" 
podID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerID="c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286" exitCode=0 Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.998577 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e9b3c27-b580-4a32-88ce-ada9ffb57f79","Type":"ContainerDied","Data":"c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286"} Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.998625 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.998656 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9e9b3c27-b580-4a32-88ce-ada9ffb57f79","Type":"ContainerDied","Data":"c00bf9c534c61b44290f4c1d7b70c3d2e9fdc3b4aff8d166c03ba41fb601f420"} Dec 01 03:17:26 crc kubenswrapper[4880]: I1201 03:17:26.998679 4880 scope.go:117] "RemoveContainer" containerID="c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.034396 4880 scope.go:117] "RemoveContainer" containerID="f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.049952 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.064719 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.073921 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:17:27 crc kubenswrapper[4880]: E1201 03:17:27.074451 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-log" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.074471 4880 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-log" Dec 01 03:17:27 crc kubenswrapper[4880]: E1201 03:17:27.074487 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-metadata" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.074493 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-metadata" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.074670 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-log" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.074721 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" containerName="nova-metadata-metadata" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.075716 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.082034 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.084115 4880 scope.go:117] "RemoveContainer" containerID="c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.084416 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 03:17:27 crc kubenswrapper[4880]: E1201 03:17:27.084633 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286\": container with ID starting with c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286 not found: ID does not exist" containerID="c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.084675 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286"} err="failed to get container status \"c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286\": rpc error: code = NotFound desc = could not find container \"c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286\": container with ID starting with c3749534726042131b9a4b13b423640d5c2940f172c0a3550d9d64e2b9928286 not found: ID does not exist" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.084703 4880 scope.go:117] "RemoveContainer" containerID="f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f" Dec 01 03:17:27 crc kubenswrapper[4880]: E1201 03:17:27.085124 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f\": container with ID starting with f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f not found: ID does not exist" containerID="f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.085152 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f"} err="failed to get container status \"f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f\": rpc error: code = NotFound desc = could not find container \"f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f\": container with ID starting with f3a9a05567ff446e662265befbb7431aecfa7981b6eec63a86e131c8dbab215f not found: ID does not exist" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.088399 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.141087 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.141176 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.141207 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-config-data\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.141315 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvkwc\" (UniqueName: \"kubernetes.io/projected/d3305837-2118-4358-a4b2-4267273a5907-kube-api-access-cvkwc\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.141349 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3305837-2118-4358-a4b2-4267273a5907-logs\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.243052 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.243342 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.243364 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-config-data\") pod \"nova-metadata-0\" (UID: 
\"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.243449 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvkwc\" (UniqueName: \"kubernetes.io/projected/d3305837-2118-4358-a4b2-4267273a5907-kube-api-access-cvkwc\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.243469 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3305837-2118-4358-a4b2-4267273a5907-logs\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.243794 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3305837-2118-4358-a4b2-4267273a5907-logs\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.247696 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.247769 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.248159 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d3305837-2118-4358-a4b2-4267273a5907-config-data\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.262640 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvkwc\" (UniqueName: \"kubernetes.io/projected/d3305837-2118-4358-a4b2-4267273a5907-kube-api-access-cvkwc\") pod \"nova-metadata-0\" (UID: \"d3305837-2118-4358-a4b2-4267273a5907\") " pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.404250 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 03:17:27 crc kubenswrapper[4880]: W1201 03:17:27.925728 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3305837_2118_4358_a4b2_4267273a5907.slice/crio-c56bd7bff8fc5b6182d17165aff135c14d3597dabf906ed566e257910b6f1b9a WatchSource:0}: Error finding container c56bd7bff8fc5b6182d17165aff135c14d3597dabf906ed566e257910b6f1b9a: Status 404 returned error can't find the container with id c56bd7bff8fc5b6182d17165aff135c14d3597dabf906ed566e257910b6f1b9a Dec 01 03:17:27 crc kubenswrapper[4880]: I1201 03:17:27.929523 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 03:17:28 crc kubenswrapper[4880]: I1201 03:17:28.016664 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3305837-2118-4358-a4b2-4267273a5907","Type":"ContainerStarted","Data":"c56bd7bff8fc5b6182d17165aff135c14d3597dabf906ed566e257910b6f1b9a"} Dec 01 03:17:28 crc kubenswrapper[4880]: I1201 03:17:28.798362 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b3c27-b580-4a32-88ce-ada9ffb57f79" path="/var/lib/kubelet/pods/9e9b3c27-b580-4a32-88ce-ada9ffb57f79/volumes" Dec 
01 03:17:28 crc kubenswrapper[4880]: I1201 03:17:28.909333 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:17:28 crc kubenswrapper[4880]: I1201 03:17:28.984112 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-config-data\") pod \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " Dec 01 03:17:28 crc kubenswrapper[4880]: I1201 03:17:28.984212 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blm2p\" (UniqueName: \"kubernetes.io/projected/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-kube-api-access-blm2p\") pod \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " Dec 01 03:17:28 crc kubenswrapper[4880]: I1201 03:17:28.984264 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-combined-ca-bundle\") pod \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\" (UID: \"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3\") " Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.012616 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-kube-api-access-blm2p" (OuterVolumeSpecName: "kube-api-access-blm2p") pod "4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" (UID: "4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3"). InnerVolumeSpecName "kube-api-access-blm2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.022230 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-config-data" (OuterVolumeSpecName: "config-data") pod "4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" (UID: "4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.023348 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" (UID: "4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.031038 4880 generic.go:334] "Generic (PLEG): container finished" podID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" exitCode=0 Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.031104 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3","Type":"ContainerDied","Data":"25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de"} Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.031133 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3","Type":"ContainerDied","Data":"809f323702e4e7947da45606cbd0d6f869a6df706e08778500f5dffd7f8a8e2f"} Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.031152 4880 scope.go:117] "RemoveContainer" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" Dec 01 03:17:29 crc 
kubenswrapper[4880]: I1201 03:17:29.031264 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.045177 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3305837-2118-4358-a4b2-4267273a5907","Type":"ContainerStarted","Data":"ec84ada7171bff08876a56bb2e3760087076debe606a9b864ab161f2562a5250"} Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.045232 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3305837-2118-4358-a4b2-4267273a5907","Type":"ContainerStarted","Data":"92932774dc3b3a6001f4bdbddf5b95dd7964770ac6ffa7ee454469b9bf3857d4"} Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.062685 4880 scope.go:117] "RemoveContainer" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" Dec 01 03:17:29 crc kubenswrapper[4880]: E1201 03:17:29.063247 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de\": container with ID starting with 25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de not found: ID does not exist" containerID="25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.063289 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de"} err="failed to get container status \"25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de\": rpc error: code = NotFound desc = could not find container \"25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de\": container with ID starting with 25cdb5ca2b417b6e7f2ba8776bf01582594d23a49a34e6a496c2628492b943de not found: ID does not exist" 
Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.083505 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.083484715 podStartE2EDuration="2.083484715s" podCreationTimestamp="2025-12-01 03:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:29.070349624 +0000 UTC m=+1278.581604026" watchObservedRunningTime="2025-12-01 03:17:29.083484715 +0000 UTC m=+1278.594739087" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.086857 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.086908 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blm2p\" (UniqueName: \"kubernetes.io/projected/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-kube-api-access-blm2p\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.086923 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.090200 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.098721 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.116515 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:17:29 crc kubenswrapper[4880]: E1201 03:17:29.117125 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" 
containerName="nova-scheduler-scheduler" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.117216 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" containerName="nova-scheduler-scheduler" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.117455 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" containerName="nova-scheduler-scheduler" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.118183 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.122692 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.131517 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.188645 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6jb\" (UniqueName: \"kubernetes.io/projected/c35d0850-f387-4eba-8a7e-71743d4ea248-kube-api-access-nn6jb\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.188702 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35d0850-f387-4eba-8a7e-71743d4ea248-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.188848 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c35d0850-f387-4eba-8a7e-71743d4ea248-config-data\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.291085 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35d0850-f387-4eba-8a7e-71743d4ea248-config-data\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.291200 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6jb\" (UniqueName: \"kubernetes.io/projected/c35d0850-f387-4eba-8a7e-71743d4ea248-kube-api-access-nn6jb\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.291233 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35d0850-f387-4eba-8a7e-71743d4ea248-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.297558 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35d0850-f387-4eba-8a7e-71743d4ea248-config-data\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.298060 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35d0850-f387-4eba-8a7e-71743d4ea248-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" 
Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.305216 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6jb\" (UniqueName: \"kubernetes.io/projected/c35d0850-f387-4eba-8a7e-71743d4ea248-kube-api-access-nn6jb\") pod \"nova-scheduler-0\" (UID: \"c35d0850-f387-4eba-8a7e-71743d4ea248\") " pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.437064 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 03:17:29 crc kubenswrapper[4880]: I1201 03:17:29.934747 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 03:17:30 crc kubenswrapper[4880]: I1201 03:17:30.063688 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c35d0850-f387-4eba-8a7e-71743d4ea248","Type":"ContainerStarted","Data":"430799c31e06344f5623f36325ba8987706db98f3ce3c42a7473e508dba0ce9e"} Dec 01 03:17:30 crc kubenswrapper[4880]: I1201 03:17:30.819016 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3" path="/var/lib/kubelet/pods/4e4bbdb1-ed64-48c0-b255-5a3f74e73ad3/volumes" Dec 01 03:17:31 crc kubenswrapper[4880]: I1201 03:17:31.082093 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c35d0850-f387-4eba-8a7e-71743d4ea248","Type":"ContainerStarted","Data":"a081f9817c83cb5150e760705071930e4490a3056e1148c121f7fd55d8969f33"} Dec 01 03:17:31 crc kubenswrapper[4880]: I1201 03:17:31.120746 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.120714601 podStartE2EDuration="2.120714601s" podCreationTimestamp="2025-12-01 03:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:17:31.106584916 
+0000 UTC m=+1280.617839328" watchObservedRunningTime="2025-12-01 03:17:31.120714601 +0000 UTC m=+1280.631969013" Dec 01 03:17:32 crc kubenswrapper[4880]: I1201 03:17:32.405353 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 03:17:32 crc kubenswrapper[4880]: I1201 03:17:32.406565 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 03:17:34 crc kubenswrapper[4880]: I1201 03:17:34.385279 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 03:17:34 crc kubenswrapper[4880]: I1201 03:17:34.385692 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 03:17:34 crc kubenswrapper[4880]: I1201 03:17:34.437817 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 03:17:35 crc kubenswrapper[4880]: I1201 03:17:35.406130 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abac88f8-21ed-4c58-b58f-d7c15125bbae" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:17:35 crc kubenswrapper[4880]: I1201 03:17:35.406169 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abac88f8-21ed-4c58-b58f-d7c15125bbae" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:17:37 crc kubenswrapper[4880]: I1201 03:17:37.405107 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 03:17:37 crc kubenswrapper[4880]: I1201 03:17:37.405173 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Dec 01 03:17:38 crc kubenswrapper[4880]: I1201 03:17:38.419182 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d3305837-2118-4358-a4b2-4267273a5907" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:17:38 crc kubenswrapper[4880]: I1201 03:17:38.419209 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d3305837-2118-4358-a4b2-4267273a5907" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 03:17:39 crc kubenswrapper[4880]: I1201 03:17:39.437586 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 03:17:39 crc kubenswrapper[4880]: I1201 03:17:39.481478 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 03:17:40 crc kubenswrapper[4880]: I1201 03:17:40.227558 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 03:17:44 crc kubenswrapper[4880]: I1201 03:17:44.402467 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 03:17:44 crc kubenswrapper[4880]: I1201 03:17:44.403353 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 03:17:44 crc kubenswrapper[4880]: I1201 03:17:44.405662 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 03:17:44 crc kubenswrapper[4880]: I1201 03:17:44.416509 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 03:17:45 crc kubenswrapper[4880]: I1201 
03:17:45.229530 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 03:17:45 crc kubenswrapper[4880]: I1201 03:17:45.248766 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 03:17:45 crc kubenswrapper[4880]: I1201 03:17:45.314357 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 03:17:47 crc kubenswrapper[4880]: I1201 03:17:47.369745 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:17:47 crc kubenswrapper[4880]: I1201 03:17:47.369827 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:17:47 crc kubenswrapper[4880]: I1201 03:17:47.416974 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 03:17:47 crc kubenswrapper[4880]: I1201 03:17:47.417141 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 01 03:17:47 crc kubenswrapper[4880]: I1201 03:17:47.433557 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 03:17:47 crc kubenswrapper[4880]: I1201 03:17:47.444626 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 03:17:49 crc kubenswrapper[4880]: I1201 03:17:49.191094 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Dec 01 03:17:49 crc kubenswrapper[4880]: I1201 03:17:49.191502 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3d04ac9c-b88e-44b8-92a6-293f737a6390" containerName="kube-state-metrics" containerID="cri-o://10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac" gracePeriod=30 Dec 01 03:17:49 crc kubenswrapper[4880]: I1201 03:17:49.669028 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 03:17:49 crc kubenswrapper[4880]: I1201 03:17:49.766744 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpq69\" (UniqueName: \"kubernetes.io/projected/3d04ac9c-b88e-44b8-92a6-293f737a6390-kube-api-access-mpq69\") pod \"3d04ac9c-b88e-44b8-92a6-293f737a6390\" (UID: \"3d04ac9c-b88e-44b8-92a6-293f737a6390\") " Dec 01 03:17:49 crc kubenswrapper[4880]: I1201 03:17:49.773630 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d04ac9c-b88e-44b8-92a6-293f737a6390-kube-api-access-mpq69" (OuterVolumeSpecName: "kube-api-access-mpq69") pod "3d04ac9c-b88e-44b8-92a6-293f737a6390" (UID: "3d04ac9c-b88e-44b8-92a6-293f737a6390"). InnerVolumeSpecName "kube-api-access-mpq69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:49 crc kubenswrapper[4880]: I1201 03:17:49.869247 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpq69\" (UniqueName: \"kubernetes.io/projected/3d04ac9c-b88e-44b8-92a6-293f737a6390-kube-api-access-mpq69\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.286542 4880 generic.go:334] "Generic (PLEG): container finished" podID="3d04ac9c-b88e-44b8-92a6-293f737a6390" containerID="10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac" exitCode=2 Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.286581 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3d04ac9c-b88e-44b8-92a6-293f737a6390","Type":"ContainerDied","Data":"10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac"} Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.286611 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3d04ac9c-b88e-44b8-92a6-293f737a6390","Type":"ContainerDied","Data":"cba3ade1ef7bf5640d5786478d9ab9730672fe5df79fe7ce5fe26606f489debd"} Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.286629 4880 scope.go:117] "RemoveContainer" containerID="10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.286661 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.316864 4880 scope.go:117] "RemoveContainer" containerID="10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac" Dec 01 03:17:50 crc kubenswrapper[4880]: E1201 03:17:50.318139 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac\": container with ID starting with 10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac not found: ID does not exist" containerID="10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.318212 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac"} err="failed to get container status \"10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac\": rpc error: code = NotFound desc = could not find container \"10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac\": container with ID starting with 10dae80e341a6d715c3fea04cedab8cd69613509456ba43c26ba8a59ae61d6ac not found: ID does not exist" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.342043 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.357694 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.372835 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:17:50 crc kubenswrapper[4880]: E1201 03:17:50.373630 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d04ac9c-b88e-44b8-92a6-293f737a6390" containerName="kube-state-metrics" Dec 01 03:17:50 crc 
kubenswrapper[4880]: I1201 03:17:50.373674 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d04ac9c-b88e-44b8-92a6-293f737a6390" containerName="kube-state-metrics" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.374109 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d04ac9c-b88e-44b8-92a6-293f737a6390" containerName="kube-state-metrics" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.375527 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.385031 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.385309 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.386242 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.480849 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.482453 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.482598 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99sh\" (UniqueName: \"kubernetes.io/projected/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-api-access-l99sh\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.482720 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.584710 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.585052 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99sh\" (UniqueName: \"kubernetes.io/projected/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-api-access-l99sh\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.585161 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.585329 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.590086 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.591602 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.592006 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa44b977-5a65-4beb-8a82-de3f5a813bd9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.605790 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99sh\" (UniqueName: \"kubernetes.io/projected/fa44b977-5a65-4beb-8a82-de3f5a813bd9-kube-api-access-l99sh\") pod \"kube-state-metrics-0\" (UID: \"fa44b977-5a65-4beb-8a82-de3f5a813bd9\") " pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.701017 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 03:17:50 crc kubenswrapper[4880]: I1201 03:17:50.797702 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d04ac9c-b88e-44b8-92a6-293f737a6390" path="/var/lib/kubelet/pods/3d04ac9c-b88e-44b8-92a6-293f737a6390/volumes" Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.124155 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.124707 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-central-agent" containerID="cri-o://e06fd97e967e8cdbfe09fb866c003975ee4e25ace627d0e6466debce41e49118" gracePeriod=30 Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.124782 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="proxy-httpd" containerID="cri-o://9fc7f02e30a47b8e06f9a239fad9c63bac87dec63e24e46ed54c17e2216a847d" gracePeriod=30 Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.124822 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-notification-agent" containerID="cri-o://ca3241d1838c53389b916b369734dd7af3b1e06e1b4f8b154b985b01c306e88b" gracePeriod=30 Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.124791 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="sg-core" containerID="cri-o://7b93cd21c480092d45c10f0b12e1e40bb84a205f1ec5d6bd5f1515c9ff1dd3cc" gracePeriod=30 Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.171387 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Dec 01 03:17:51 crc kubenswrapper[4880]: W1201 03:17:51.176079 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa44b977_5a65_4beb_8a82_de3f5a813bd9.slice/crio-9f3e683f8d5c06f6136eecdc7ab94d32e34f73454a05063f10ff5191a37aeb29 WatchSource:0}: Error finding container 9f3e683f8d5c06f6136eecdc7ab94d32e34f73454a05063f10ff5191a37aeb29: Status 404 returned error can't find the container with id 9f3e683f8d5c06f6136eecdc7ab94d32e34f73454a05063f10ff5191a37aeb29 Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.304204 4880 generic.go:334] "Generic (PLEG): container finished" podID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerID="7b93cd21c480092d45c10f0b12e1e40bb84a205f1ec5d6bd5f1515c9ff1dd3cc" exitCode=2 Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.304300 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerDied","Data":"7b93cd21c480092d45c10f0b12e1e40bb84a205f1ec5d6bd5f1515c9ff1dd3cc"} Dec 01 03:17:51 crc kubenswrapper[4880]: I1201 03:17:51.310556 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa44b977-5a65-4beb-8a82-de3f5a813bd9","Type":"ContainerStarted","Data":"9f3e683f8d5c06f6136eecdc7ab94d32e34f73454a05063f10ff5191a37aeb29"} Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 03:17:52.325923 4880 generic.go:334] "Generic (PLEG): container finished" podID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerID="9fc7f02e30a47b8e06f9a239fad9c63bac87dec63e24e46ed54c17e2216a847d" exitCode=0 Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 03:17:52.326224 4880 generic.go:334] "Generic (PLEG): container finished" podID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerID="e06fd97e967e8cdbfe09fb866c003975ee4e25ace627d0e6466debce41e49118" exitCode=0 Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 
03:17:52.326024 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerDied","Data":"9fc7f02e30a47b8e06f9a239fad9c63bac87dec63e24e46ed54c17e2216a847d"} Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 03:17:52.326298 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerDied","Data":"e06fd97e967e8cdbfe09fb866c003975ee4e25ace627d0e6466debce41e49118"} Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 03:17:52.327887 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa44b977-5a65-4beb-8a82-de3f5a813bd9","Type":"ContainerStarted","Data":"3a705440295c682ca65aa6d33002d459135c59ce3f3e41881d71e1a5e6457fe1"} Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 03:17:52.328016 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 03:17:52 crc kubenswrapper[4880]: I1201 03:17:52.349077 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.9693380459999998 podStartE2EDuration="2.349056963s" podCreationTimestamp="2025-12-01 03:17:50 +0000 UTC" firstStartedPulling="2025-12-01 03:17:51.177894642 +0000 UTC m=+1300.689149014" lastFinishedPulling="2025-12-01 03:17:51.557613559 +0000 UTC m=+1301.068867931" observedRunningTime="2025-12-01 03:17:52.341382006 +0000 UTC m=+1301.852636378" watchObservedRunningTime="2025-12-01 03:17:52.349056963 +0000 UTC m=+1301.860311335" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.357062 4880 generic.go:334] "Generic (PLEG): container finished" podID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerID="ca3241d1838c53389b916b369734dd7af3b1e06e1b4f8b154b985b01c306e88b" exitCode=0 Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.358169 4880 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerDied","Data":"ca3241d1838c53389b916b369734dd7af3b1e06e1b4f8b154b985b01c306e88b"} Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.505524 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.639638 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-config-data\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.639952 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-scripts\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.646482 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-scripts" (OuterVolumeSpecName: "scripts") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.652631 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmllk\" (UniqueName: \"kubernetes.io/projected/f44572b9-ae8e-41eb-a937-90ea818187d9-kube-api-access-xmllk\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.652825 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-run-httpd\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.653195 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-combined-ca-bundle\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.653377 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-log-httpd\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.653528 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.653762 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-sg-core-conf-yaml\") pod \"f44572b9-ae8e-41eb-a937-90ea818187d9\" (UID: \"f44572b9-ae8e-41eb-a937-90ea818187d9\") " Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.654537 4880 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.654641 4880 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.654559 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.656052 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44572b9-ae8e-41eb-a937-90ea818187d9-kube-api-access-xmllk" (OuterVolumeSpecName: "kube-api-access-xmllk") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "kube-api-access-xmllk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.694063 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.742166 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.755970 4880 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.756112 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmllk\" (UniqueName: \"kubernetes.io/projected/f44572b9-ae8e-41eb-a937-90ea818187d9-kube-api-access-xmllk\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.756176 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.756238 4880 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f44572b9-ae8e-41eb-a937-90ea818187d9-log-httpd\") 
on node \"crc\" DevicePath \"\"" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.781718 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-config-data" (OuterVolumeSpecName: "config-data") pod "f44572b9-ae8e-41eb-a937-90ea818187d9" (UID: "f44572b9-ae8e-41eb-a937-90ea818187d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:17:53 crc kubenswrapper[4880]: I1201 03:17:53.858102 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f44572b9-ae8e-41eb-a937-90ea818187d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.373716 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f44572b9-ae8e-41eb-a937-90ea818187d9","Type":"ContainerDied","Data":"16b0f55301ed0591be89df1a83c83269412e6d259242d8835aed8d229e27fc6a"} Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.375008 4880 scope.go:117] "RemoveContainer" containerID="9fc7f02e30a47b8e06f9a239fad9c63bac87dec63e24e46ed54c17e2216a847d" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.373818 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.402081 4880 scope.go:117] "RemoveContainer" containerID="7b93cd21c480092d45c10f0b12e1e40bb84a205f1ec5d6bd5f1515c9ff1dd3cc" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.424585 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.432686 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.435951 4880 scope.go:117] "RemoveContainer" containerID="ca3241d1838c53389b916b369734dd7af3b1e06e1b4f8b154b985b01c306e88b" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.459337 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:54 crc kubenswrapper[4880]: E1201 03:17:54.459776 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="proxy-httpd" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.459794 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="proxy-httpd" Dec 01 03:17:54 crc kubenswrapper[4880]: E1201 03:17:54.459815 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="sg-core" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.459822 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="sg-core" Dec 01 03:17:54 crc kubenswrapper[4880]: E1201 03:17:54.459837 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-notification-agent" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.459843 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-notification-agent" Dec 01 03:17:54 crc kubenswrapper[4880]: E1201 03:17:54.459865 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-central-agent" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.459887 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-central-agent" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.460041 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-central-agent" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.460052 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="ceilometer-notification-agent" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.460078 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="proxy-httpd" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.460084 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" containerName="sg-core" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.461949 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.467765 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.468651 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.468853 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.478636 4880 scope.go:117] "RemoveContainer" containerID="e06fd97e967e8cdbfe09fb866c003975ee4e25ace627d0e6466debce41e49118" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.499049 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571201 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-config-data\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571265 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571319 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqz2p\" (UniqueName: \"kubernetes.io/projected/10321a9a-7170-4545-b0cb-ae57a4c06a13-kube-api-access-sqz2p\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " 
pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571354 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10321a9a-7170-4545-b0cb-ae57a4c06a13-log-httpd\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571411 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-scripts\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571441 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571497 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.571532 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10321a9a-7170-4545-b0cb-ae57a4c06a13-run-httpd\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678294 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sqz2p\" (UniqueName: \"kubernetes.io/projected/10321a9a-7170-4545-b0cb-ae57a4c06a13-kube-api-access-sqz2p\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678373 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10321a9a-7170-4545-b0cb-ae57a4c06a13-log-httpd\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678412 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-scripts\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678432 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678485 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678518 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10321a9a-7170-4545-b0cb-ae57a4c06a13-run-httpd\") pod \"ceilometer-0\" (UID: 
\"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678588 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-config-data\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.678637 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.680163 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10321a9a-7170-4545-b0cb-ae57a4c06a13-run-httpd\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.680193 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/10321a9a-7170-4545-b0cb-ae57a4c06a13-log-httpd\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.689568 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-scripts\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.689985 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-config-data\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.692771 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.694882 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.702934 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/10321a9a-7170-4545-b0cb-ae57a4c06a13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.705830 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqz2p\" (UniqueName: \"kubernetes.io/projected/10321a9a-7170-4545-b0cb-ae57a4c06a13-kube-api-access-sqz2p\") pod \"ceilometer-0\" (UID: \"10321a9a-7170-4545-b0cb-ae57a4c06a13\") " pod="openstack/ceilometer-0" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.794305 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44572b9-ae8e-41eb-a937-90ea818187d9" path="/var/lib/kubelet/pods/f44572b9-ae8e-41eb-a937-90ea818187d9/volumes" Dec 01 03:17:54 crc kubenswrapper[4880]: I1201 03:17:54.795813 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 03:17:55 crc kubenswrapper[4880]: I1201 03:17:55.260896 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 03:17:55 crc kubenswrapper[4880]: I1201 03:17:55.382846 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10321a9a-7170-4545-b0cb-ae57a4c06a13","Type":"ContainerStarted","Data":"a53e7c722742df7e33023ec60174c0f13873d48c31abdccb7374f809d2b5f09b"} Dec 01 03:17:56 crc kubenswrapper[4880]: I1201 03:17:56.424028 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10321a9a-7170-4545-b0cb-ae57a4c06a13","Type":"ContainerStarted","Data":"567ab1d89f1e63c9ed4bdaea3a1cecaf720c2c4b18547f2e75022648179210dc"} Dec 01 03:17:56 crc kubenswrapper[4880]: I1201 03:17:56.424345 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10321a9a-7170-4545-b0cb-ae57a4c06a13","Type":"ContainerStarted","Data":"f2db0518e677b06be2bd7ea3c9759cd48091c0933ee624a393d3f8634c598ac2"} Dec 01 03:17:57 crc kubenswrapper[4880]: I1201 03:17:57.433164 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10321a9a-7170-4545-b0cb-ae57a4c06a13","Type":"ContainerStarted","Data":"ec77415f3c7b4390b241f609044300300eaed33bbb7430ba8315df062c910e90"} Dec 01 03:17:58 crc kubenswrapper[4880]: I1201 03:17:58.077955 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:17:58 crc kubenswrapper[4880]: I1201 03:17:58.442900 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"10321a9a-7170-4545-b0cb-ae57a4c06a13","Type":"ContainerStarted","Data":"6d28ee0c2588b8387818b5f0065461bfd9a4e8bac8ca1cea830d1fecf8987541"} Dec 01 03:17:58 crc kubenswrapper[4880]: I1201 03:17:58.443249 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 01 03:17:59 crc kubenswrapper[4880]: I1201 03:17:59.272257 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.663558932 podStartE2EDuration="5.272239162s" podCreationTimestamp="2025-12-01 03:17:54 +0000 UTC" firstStartedPulling="2025-12-01 03:17:55.282745105 +0000 UTC m=+1304.793999497" lastFinishedPulling="2025-12-01 03:17:57.891425355 +0000 UTC m=+1307.402679727" observedRunningTime="2025-12-01 03:17:58.47512601 +0000 UTC m=+1307.986380382" watchObservedRunningTime="2025-12-01 03:17:59.272239162 +0000 UTC m=+1308.783493534" Dec 01 03:17:59 crc kubenswrapper[4880]: I1201 03:17:59.278018 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:18:00 crc kubenswrapper[4880]: I1201 03:18:00.715800 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 03:18:03 crc kubenswrapper[4880]: I1201 03:18:03.171663 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerName="rabbitmq" containerID="cri-o://63a1523ecb3393de74ecade1408c7411b563e31b3d9177f58986ad2f3850a22a" gracePeriod=604795 Dec 01 03:18:04 crc kubenswrapper[4880]: I1201 03:18:04.153654 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="rabbitmq" containerID="cri-o://8471f031cfe28e09c92a798118777d4337f58622935c44f175d34afcdbcc9fa2" gracePeriod=604796 Dec 01 03:18:08 crc kubenswrapper[4880]: I1201 03:18:08.822205 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 01 
03:18:09 crc kubenswrapper[4880]: I1201 03:18:09.462247 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 01 03:18:09 crc kubenswrapper[4880]: I1201 03:18:09.546050 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerID="63a1523ecb3393de74ecade1408c7411b563e31b3d9177f58986ad2f3850a22a" exitCode=0 Dec 01 03:18:09 crc kubenswrapper[4880]: I1201 03:18:09.546141 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d7b466f3-1cab-4282-963d-2cf055d1514f","Type":"ContainerDied","Data":"63a1523ecb3393de74ecade1408c7411b563e31b3d9177f58986ad2f3850a22a"} Dec 01 03:18:09 crc kubenswrapper[4880]: I1201 03:18:09.796281 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012675 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-server-conf\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012734 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-erlang-cookie\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012755 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-plugins\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012805 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7b466f3-1cab-4282-963d-2cf055d1514f-pod-info\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012857 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgn98\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-kube-api-access-sgn98\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012908 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-tls\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012949 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-confd\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.012968 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-plugins-conf\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 
03:18:10.012988 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.013003 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-config-data\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.013031 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7b466f3-1cab-4282-963d-2cf055d1514f-erlang-cookie-secret\") pod \"d7b466f3-1cab-4282-963d-2cf055d1514f\" (UID: \"d7b466f3-1cab-4282-963d-2cf055d1514f\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.015102 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.015628 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.015862 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.035441 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b466f3-1cab-4282-963d-2cf055d1514f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.038285 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-kube-api-access-sgn98" (OuterVolumeSpecName: "kube-api-access-sgn98") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "kube-api-access-sgn98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.045313 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d7b466f3-1cab-4282-963d-2cf055d1514f-pod-info" (OuterVolumeSpecName: "pod-info") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.056764 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.066572 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.103608 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-config-data" (OuterVolumeSpecName: "config-data") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118386 4880 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7b466f3-1cab-4282-963d-2cf055d1514f-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118423 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgn98\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-kube-api-access-sgn98\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118434 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118443 4880 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118451 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118476 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118486 4880 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7b466f3-1cab-4282-963d-2cf055d1514f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118495 4880 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.118505 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.147909 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.177927 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-server-conf" (OuterVolumeSpecName: "server-conf") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.206858 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d7b466f3-1cab-4282-963d-2cf055d1514f" (UID: "d7b466f3-1cab-4282-963d-2cf055d1514f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.220019 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7b466f3-1cab-4282-963d-2cf055d1514f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.220072 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.220086 4880 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7b466f3-1cab-4282-963d-2cf055d1514f-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.559488 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d7b466f3-1cab-4282-963d-2cf055d1514f","Type":"ContainerDied","Data":"93b18078e1f7eb4a0510c1b412a4243507cf9c4916e87205ab606c82d43f26ad"} Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.559522 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.559545 4880 scope.go:117] "RemoveContainer" containerID="63a1523ecb3393de74ecade1408c7411b563e31b3d9177f58986ad2f3850a22a" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.563766 4880 generic.go:334] "Generic (PLEG): container finished" podID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerID="8471f031cfe28e09c92a798118777d4337f58622935c44f175d34afcdbcc9fa2" exitCode=0 Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.563805 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd","Type":"ContainerDied","Data":"8471f031cfe28e09c92a798118777d4337f58622935c44f175d34afcdbcc9fa2"} Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.613052 4880 scope.go:117] "RemoveContainer" containerID="f0877f6eb973ad9dbf0a4f3dba4a79a8b3f449d6a63ad7c8fcb84bb8765bd5b7" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.625522 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.640180 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.666925 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:18:10 crc kubenswrapper[4880]: E1201 03:18:10.667340 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerName="rabbitmq" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.667359 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerName="rabbitmq" Dec 01 03:18:10 crc kubenswrapper[4880]: E1201 03:18:10.667367 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" 
containerName="setup-container" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.667374 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerName="setup-container" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.667555 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" containerName="rabbitmq" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.671107 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.678705 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.679465 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.679746 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.680044 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.680254 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.681084 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.681297 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4jtkx" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.681510 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.810472 4880 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b466f3-1cab-4282-963d-2cf055d1514f" path="/var/lib/kubelet/pods/d7b466f3-1cab-4282-963d-2cf055d1514f/volumes" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.820339 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834380 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834450 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d0d4ce3-3730-423f-89a7-8190f9275a00-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834491 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834531 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834571 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834612 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d0d4ce3-3730-423f-89a7-8190f9275a00-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834630 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834644 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcpsk\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-kube-api-access-fcpsk\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834668 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834754 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.834772 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935433 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-plugins-conf\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935497 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-plugins\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935582 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935600 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-confd\") pod 
\"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935619 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-pod-info\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935665 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-erlang-cookie-secret\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935697 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrbgp\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-kube-api-access-rrbgp\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935738 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-tls\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935758 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-config-data\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935781 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-server-conf\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.935807 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-erlang-cookie\") pod \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\" (UID: \"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd\") " Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936028 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936155 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936174 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936201 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936228 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d0d4ce3-3730-423f-89a7-8190f9275a00-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936243 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936261 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936279 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936319 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d0d4ce3-3730-423f-89a7-8190f9275a00-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936335 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.936349 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcpsk\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-kube-api-access-fcpsk\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.946555 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.947265 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.948104 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.950643 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.953247 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.949170 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.966128 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.966436 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.967679 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.953291 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.957104 4880 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.971380 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.972155 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.972375 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.976228 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.977972 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.978925 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.990541 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d0d4ce3-3730-423f-89a7-8190f9275a00-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.991067 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-kube-api-access-rrbgp" (OuterVolumeSpecName: "kube-api-access-rrbgp") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "kube-api-access-rrbgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.992264 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d0d4ce3-3730-423f-89a7-8190f9275a00-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.992444 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.993086 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcpsk\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-kube-api-access-fcpsk\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:10 crc kubenswrapper[4880]: I1201 03:18:10.998477 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:10.998710 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d0d4ce3-3730-423f-89a7-8190f9275a00-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054435 4880 
reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054460 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054485 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054494 4880 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054503 4880 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054514 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrbgp\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-kube-api-access-rrbgp\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054522 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.054533 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.060171 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d0d4ce3-3730-423f-89a7-8190f9275a00-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.133901 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.134436 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-config-data" (OuterVolumeSpecName: "config-data") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.168097 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"6d0d4ce3-3730-423f-89a7-8190f9275a00\") " pod="openstack/rabbitmq-server-0" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.172242 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.172266 4880 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.179121 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.254208 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" (UID: "fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.274115 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.274146 4880 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.419279 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4jtkx" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.428080 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.579926 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd","Type":"ContainerDied","Data":"b8bd0258bc7cddee83a8378215e1d6582db40e71dbfa6674786878c12a60d406"} Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.579968 4880 scope.go:117] "RemoveContainer" containerID="8471f031cfe28e09c92a798118777d4337f58622935c44f175d34afcdbcc9fa2" Dec 01 03:18:11 crc kubenswrapper[4880]: I1201 03:18:11.580055 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.645187 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.646158 4880 scope.go:117] "RemoveContainer" containerID="5e495af25e395418128c51e910ef72ae6a4966db29958e2b1fc02c70abaf0de2" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.680147 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.704509 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:18:12 crc kubenswrapper[4880]: E1201 03:18:11.705144 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="rabbitmq" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.705156 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="rabbitmq" Dec 01 03:18:12 crc kubenswrapper[4880]: E1201 03:18:11.705174 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="setup-container" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.705181 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="setup-container" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.705362 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" containerName="rabbitmq" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.706356 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.713781 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718253 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718561 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718767 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5hwf6" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718822 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718906 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718992 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.718995 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.884447 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78bc6dbb45-4x9fm"] Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.887656 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.887904 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.887950 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f49c9908-451f-43c0-a143-8269153c4a4f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.887986 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888002 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888049 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888102 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpg7r\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-kube-api-access-xpg7r\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888154 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f49c9908-451f-43c0-a143-8269153c4a4f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888171 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888197 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888231 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.888259 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.894840 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.906195 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78bc6dbb45-4x9fm"] Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.989912 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcv68\" (UniqueName: \"kubernetes.io/projected/a347b94c-07af-4c7e-8642-bb8d1805fcba-kube-api-access-vcv68\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.989948 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.989968 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-sb\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " 
pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.989999 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-config\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990041 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpg7r\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-kube-api-access-xpg7r\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990067 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-nb\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990090 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-openstack-edpm-ipam\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990123 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f49c9908-451f-43c0-a143-8269153c4a4f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990139 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990164 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990181 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990204 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990227 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990242 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-svc\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990270 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f49c9908-451f-43c0-a143-8269153c4a4f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990288 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990305 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-swift-storage-0\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990322 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.990680 4880 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.995762 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.996577 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.996616 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.997219 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f49c9908-451f-43c0-a143-8269153c4a4f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.997459 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:11.999664 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.000759 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f49c9908-451f-43c0-a143-8269153c4a4f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.000762 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f49c9908-451f-43c0-a143-8269153c4a4f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.007650 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.015399 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpg7r\" (UniqueName: \"kubernetes.io/projected/f49c9908-451f-43c0-a143-8269153c4a4f-kube-api-access-xpg7r\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.051417 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f49c9908-451f-43c0-a143-8269153c4a4f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091547 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-nb\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091599 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-openstack-edpm-ipam\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091678 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-svc\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091712 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-swift-storage-0\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" 
Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091749 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcv68\" (UniqueName: \"kubernetes.io/projected/a347b94c-07af-4c7e-8642-bb8d1805fcba-kube-api-access-vcv68\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091768 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-sb\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.091793 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-config\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.092824 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-config\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.093167 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-nb\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.093301 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-svc\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.093377 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-sb\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.094132 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-swift-storage-0\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.094618 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-openstack-edpm-ipam\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.108381 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcv68\" (UniqueName: \"kubernetes.io/projected/a347b94c-07af-4c7e-8642-bb8d1805fcba-kube-api-access-vcv68\") pod \"dnsmasq-dns-78bc6dbb45-4x9fm\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.218506 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.340915 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.644118 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.760639 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78bc6dbb45-4x9fm"] Dec 01 03:18:12 crc kubenswrapper[4880]: W1201 03:18:12.764862 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda347b94c_07af_4c7e_8642_bb8d1805fcba.slice/crio-19c736c8e6c382000206c6a580084ac3435131220695fb267e8689ffb00bc4b1 WatchSource:0}: Error finding container 19c736c8e6c382000206c6a580084ac3435131220695fb267e8689ffb00bc4b1: Status 404 returned error can't find the container with id 19c736c8e6c382000206c6a580084ac3435131220695fb267e8689ffb00bc4b1 Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.800759 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd" path="/var/lib/kubelet/pods/fa32fb63-f8bd-4747-9c6e-9dbe58c1d9bd/volumes" Dec 01 03:18:12 crc kubenswrapper[4880]: W1201 03:18:12.869304 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf49c9908_451f_43c0_a143_8269153c4a4f.slice/crio-7ad8c0c70689f42f4a2b0525e9d83288c3fe3a21ddd95ad830c337e422d1b956 WatchSource:0}: Error finding container 7ad8c0c70689f42f4a2b0525e9d83288c3fe3a21ddd95ad830c337e422d1b956: Status 404 returned error can't find the container with id 7ad8c0c70689f42f4a2b0525e9d83288c3fe3a21ddd95ad830c337e422d1b956 Dec 01 03:18:12 crc kubenswrapper[4880]: I1201 03:18:12.869235 4880 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 03:18:13 crc kubenswrapper[4880]: I1201 03:18:13.607111 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d0d4ce3-3730-423f-89a7-8190f9275a00","Type":"ContainerStarted","Data":"b442807155d57525bd6fc5d285ec2a0618d0fc8b9bfcfa2fcb4a7be30e5cab2b"} Dec 01 03:18:13 crc kubenswrapper[4880]: I1201 03:18:13.610138 4880 generic.go:334] "Generic (PLEG): container finished" podID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerID="318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a" exitCode=0 Dec 01 03:18:13 crc kubenswrapper[4880]: I1201 03:18:13.610202 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" event={"ID":"a347b94c-07af-4c7e-8642-bb8d1805fcba","Type":"ContainerDied","Data":"318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a"} Dec 01 03:18:13 crc kubenswrapper[4880]: I1201 03:18:13.610270 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" event={"ID":"a347b94c-07af-4c7e-8642-bb8d1805fcba","Type":"ContainerStarted","Data":"19c736c8e6c382000206c6a580084ac3435131220695fb267e8689ffb00bc4b1"} Dec 01 03:18:13 crc kubenswrapper[4880]: I1201 03:18:13.611931 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f49c9908-451f-43c0-a143-8269153c4a4f","Type":"ContainerStarted","Data":"7ad8c0c70689f42f4a2b0525e9d83288c3fe3a21ddd95ad830c337e422d1b956"} Dec 01 03:18:14 crc kubenswrapper[4880]: I1201 03:18:14.627304 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f49c9908-451f-43c0-a143-8269153c4a4f","Type":"ContainerStarted","Data":"c0e9f30a7376eb5a53222c800d18bc9277b442cb3d45f5056840c2b86760d9b3"} Dec 01 03:18:14 crc kubenswrapper[4880]: I1201 03:18:14.630699 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"6d0d4ce3-3730-423f-89a7-8190f9275a00","Type":"ContainerStarted","Data":"80d09d162a28cb221a1690c329e940f173eff33187473e5786b45e334994936f"} Dec 01 03:18:14 crc kubenswrapper[4880]: I1201 03:18:14.637795 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" event={"ID":"a347b94c-07af-4c7e-8642-bb8d1805fcba","Type":"ContainerStarted","Data":"c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec"} Dec 01 03:18:14 crc kubenswrapper[4880]: I1201 03:18:14.639972 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:14 crc kubenswrapper[4880]: I1201 03:18:14.703391 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" podStartSLOduration=3.703372244 podStartE2EDuration="3.703372244s" podCreationTimestamp="2025-12-01 03:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:18:14.693352259 +0000 UTC m=+1324.204606631" watchObservedRunningTime="2025-12-01 03:18:14.703372244 +0000 UTC m=+1324.214626616" Dec 01 03:18:17 crc kubenswrapper[4880]: I1201 03:18:17.369793 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:18:17 crc kubenswrapper[4880]: I1201 03:18:17.370135 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.221189 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.319786 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7897ddc5-444fn"] Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.320012 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerName="dnsmasq-dns" containerID="cri-o://1f22914ddc5d7595b5517c3afda303efadda1a4edd095190f5e2a98365228198" gracePeriod=10 Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.567411 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-687dd88987-t76fg"] Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.569447 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.658968 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-openstack-edpm-ipam\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.659045 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-dns-swift-storage-0\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.660078 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-config\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.660125 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-ovsdbserver-sb\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.660198 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-dns-svc\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.660236 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-ovsdbserver-nb\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.660258 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qcp8\" (UniqueName: \"kubernetes.io/projected/703b1c0c-6316-4623-b386-60f4b7a23776-kube-api-access-9qcp8\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.674953 4880 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-687dd88987-t76fg"] Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768033 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-config\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768074 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-ovsdbserver-sb\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768119 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-dns-svc\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768137 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-ovsdbserver-nb\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768154 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qcp8\" (UniqueName: \"kubernetes.io/projected/703b1c0c-6316-4623-b386-60f4b7a23776-kube-api-access-9qcp8\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " 
pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768196 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-openstack-edpm-ipam\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.768228 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-dns-swift-storage-0\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.769167 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-dns-swift-storage-0\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.769672 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-ovsdbserver-nb\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.770612 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-openstack-edpm-ipam\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " 
pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.771122 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-config\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.771601 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-ovsdbserver-sb\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.771690 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703b1c0c-6316-4623-b386-60f4b7a23776-dns-svc\") pod \"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.784632 4880 generic.go:334] "Generic (PLEG): container finished" podID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerID="1f22914ddc5d7595b5517c3afda303efadda1a4edd095190f5e2a98365228198" exitCode=0 Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.784673 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" event={"ID":"f40b8e9f-f0a2-41fb-9141-80262a6f64bb","Type":"ContainerDied","Data":"1f22914ddc5d7595b5517c3afda303efadda1a4edd095190f5e2a98365228198"} Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.860905 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qcp8\" (UniqueName: \"kubernetes.io/projected/703b1c0c-6316-4623-b386-60f4b7a23776-kube-api-access-9qcp8\") pod 
\"dnsmasq-dns-687dd88987-t76fg\" (UID: \"703b1c0c-6316-4623-b386-60f4b7a23776\") " pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:22 crc kubenswrapper[4880]: I1201 03:18:22.892340 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.266320 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.383018 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-svc\") pod \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.383339 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-config\") pod \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.383361 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-sb\") pod \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.383398 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgbbg\" (UniqueName: \"kubernetes.io/projected/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-kube-api-access-cgbbg\") pod \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.383425 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-swift-storage-0\") pod \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.383495 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-nb\") pod \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\" (UID: \"f40b8e9f-f0a2-41fb-9141-80262a6f64bb\") " Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.393102 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-kube-api-access-cgbbg" (OuterVolumeSpecName: "kube-api-access-cgbbg") pod "f40b8e9f-f0a2-41fb-9141-80262a6f64bb" (UID: "f40b8e9f-f0a2-41fb-9141-80262a6f64bb"). InnerVolumeSpecName "kube-api-access-cgbbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.477371 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-687dd88987-t76fg"] Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.477885 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f40b8e9f-f0a2-41fb-9141-80262a6f64bb" (UID: "f40b8e9f-f0a2-41fb-9141-80262a6f64bb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.487183 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.487258 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgbbg\" (UniqueName: \"kubernetes.io/projected/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-kube-api-access-cgbbg\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.505422 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-config" (OuterVolumeSpecName: "config") pod "f40b8e9f-f0a2-41fb-9141-80262a6f64bb" (UID: "f40b8e9f-f0a2-41fb-9141-80262a6f64bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.510293 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f40b8e9f-f0a2-41fb-9141-80262a6f64bb" (UID: "f40b8e9f-f0a2-41fb-9141-80262a6f64bb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.528849 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f40b8e9f-f0a2-41fb-9141-80262a6f64bb" (UID: "f40b8e9f-f0a2-41fb-9141-80262a6f64bb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.546980 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f40b8e9f-f0a2-41fb-9141-80262a6f64bb" (UID: "f40b8e9f-f0a2-41fb-9141-80262a6f64bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.595904 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.595936 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.595947 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.595955 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f40b8e9f-f0a2-41fb-9141-80262a6f64bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.795028 4880 generic.go:334] "Generic (PLEG): container finished" podID="703b1c0c-6316-4623-b386-60f4b7a23776" containerID="b43d4f3e70f4d0cfb41fc6df7fcaa6d90161d695c732514c0e1a5f208584c6f2" exitCode=0 Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.795332 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687dd88987-t76fg" 
event={"ID":"703b1c0c-6316-4623-b386-60f4b7a23776","Type":"ContainerDied","Data":"b43d4f3e70f4d0cfb41fc6df7fcaa6d90161d695c732514c0e1a5f208584c6f2"} Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.795377 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687dd88987-t76fg" event={"ID":"703b1c0c-6316-4623-b386-60f4b7a23776","Type":"ContainerStarted","Data":"433c5014996a2719f92403663d45010ecbdcd0d9ac0fb9807a510e451e03cc00"} Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.801207 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" event={"ID":"f40b8e9f-f0a2-41fb-9141-80262a6f64bb","Type":"ContainerDied","Data":"bf8f07c382cd3c31499c95f63e769019c17cf76f30749abe10379586dafd5577"} Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.801287 4880 scope.go:117] "RemoveContainer" containerID="1f22914ddc5d7595b5517c3afda303efadda1a4edd095190f5e2a98365228198" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.801443 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7897ddc5-444fn" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.896422 4880 scope.go:117] "RemoveContainer" containerID="9b2d3d079313049e00b83486dcb839620014dd5107516d883c7abd737aabcaf5" Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.911930 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7897ddc5-444fn"] Dec 01 03:18:23 crc kubenswrapper[4880]: I1201 03:18:23.919387 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f7897ddc5-444fn"] Dec 01 03:18:24 crc kubenswrapper[4880]: I1201 03:18:24.807982 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" path="/var/lib/kubelet/pods/f40b8e9f-f0a2-41fb-9141-80262a6f64bb/volumes" Dec 01 03:18:24 crc kubenswrapper[4880]: I1201 03:18:24.813620 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 03:18:24 crc kubenswrapper[4880]: I1201 03:18:24.821664 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-687dd88987-t76fg" event={"ID":"703b1c0c-6316-4623-b386-60f4b7a23776","Type":"ContainerStarted","Data":"e6d6b96e6d7f05e8bd9e40262f6fcad4e82c46c6bef10512df6a329cc3414885"} Dec 01 03:18:24 crc kubenswrapper[4880]: I1201 03:18:24.821893 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:24 crc kubenswrapper[4880]: I1201 03:18:24.854058 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-687dd88987-t76fg" podStartSLOduration=2.854026214 podStartE2EDuration="2.854026214s" podCreationTimestamp="2025-12-01 03:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:18:24.852431325 +0000 UTC m=+1334.363685707" 
watchObservedRunningTime="2025-12-01 03:18:24.854026214 +0000 UTC m=+1334.365280576" Dec 01 03:18:28 crc kubenswrapper[4880]: I1201 03:18:28.180374 4880 scope.go:117] "RemoveContainer" containerID="a366fccd938fbc0ecbfe77999f385db9c58fbab064a55cfd0aec9a88003d1997" Dec 01 03:18:28 crc kubenswrapper[4880]: I1201 03:18:28.208635 4880 scope.go:117] "RemoveContainer" containerID="861ab21707c5a233dccf644b4c19ecfd8a448e22256b77c22bc9766977eec2d9" Dec 01 03:18:32 crc kubenswrapper[4880]: I1201 03:18:32.899159 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-687dd88987-t76fg" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.011961 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78bc6dbb45-4x9fm"] Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.012180 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerName="dnsmasq-dns" containerID="cri-o://c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec" gracePeriod=10 Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.534471 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695133 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcv68\" (UniqueName: \"kubernetes.io/projected/a347b94c-07af-4c7e-8642-bb8d1805fcba-kube-api-access-vcv68\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695312 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-openstack-edpm-ipam\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695350 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-config\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695372 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-svc\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695424 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-swift-storage-0\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695465 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-sb\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.695500 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-nb\") pod \"a347b94c-07af-4c7e-8642-bb8d1805fcba\" (UID: \"a347b94c-07af-4c7e-8642-bb8d1805fcba\") " Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.706688 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a347b94c-07af-4c7e-8642-bb8d1805fcba-kube-api-access-vcv68" (OuterVolumeSpecName: "kube-api-access-vcv68") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "kube-api-access-vcv68". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.810939 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcv68\" (UniqueName: \"kubernetes.io/projected/a347b94c-07af-4c7e-8642-bb8d1805fcba-kube-api-access-vcv68\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.848978 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.861295 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.881717 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.885535 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.898085 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-config" (OuterVolumeSpecName: "config") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.902467 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a347b94c-07af-4c7e-8642-bb8d1805fcba" (UID: "a347b94c-07af-4c7e-8642-bb8d1805fcba"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.912541 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.912571 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.912586 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-config\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.912596 4880 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.912604 4880 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.912612 4880 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a347b94c-07af-4c7e-8642-bb8d1805fcba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.924640 4880 generic.go:334] "Generic (PLEG): container finished" podID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerID="c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec" exitCode=0 Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.924700 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" event={"ID":"a347b94c-07af-4c7e-8642-bb8d1805fcba","Type":"ContainerDied","Data":"c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec"} Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.924766 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" event={"ID":"a347b94c-07af-4c7e-8642-bb8d1805fcba","Type":"ContainerDied","Data":"19c736c8e6c382000206c6a580084ac3435131220695fb267e8689ffb00bc4b1"} Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.924790 4880 scope.go:117] "RemoveContainer" containerID="c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.924729 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc6dbb45-4x9fm" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.958145 4880 scope.go:117] "RemoveContainer" containerID="318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.966905 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78bc6dbb45-4x9fm"] Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.973718 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78bc6dbb45-4x9fm"] Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.976712 4880 scope.go:117] "RemoveContainer" containerID="c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec" Dec 01 03:18:33 crc kubenswrapper[4880]: E1201 03:18:33.979090 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec\": container with ID starting with c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec not found: ID does not exist" containerID="c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.979129 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec"} err="failed to get container status \"c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec\": rpc error: code = NotFound desc = could not find container \"c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec\": container with ID starting with c76e62bc5a2c4c4b7eccc282e6c790c50d0c86c94711fbe266ebfc2db97999ec not found: ID does not exist" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.979159 4880 scope.go:117] "RemoveContainer" containerID="318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a" Dec 01 
03:18:33 crc kubenswrapper[4880]: E1201 03:18:33.979438 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a\": container with ID starting with 318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a not found: ID does not exist" containerID="318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a" Dec 01 03:18:33 crc kubenswrapper[4880]: I1201 03:18:33.979472 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a"} err="failed to get container status \"318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a\": rpc error: code = NotFound desc = could not find container \"318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a\": container with ID starting with 318a128dd022b1f9393edf248c946cfb387b4af6d53e1f7fcaf0d16efded1e3a not found: ID does not exist" Dec 01 03:18:34 crc kubenswrapper[4880]: I1201 03:18:34.795089 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" path="/var/lib/kubelet/pods/a347b94c-07af-4c7e-8642-bb8d1805fcba/volumes" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.481403 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf"] Dec 01 03:18:45 crc kubenswrapper[4880]: E1201 03:18:45.482172 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerName="dnsmasq-dns" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.482184 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerName="dnsmasq-dns" Dec 01 03:18:45 crc kubenswrapper[4880]: E1201 03:18:45.482204 4880 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerName="dnsmasq-dns" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.482210 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerName="dnsmasq-dns" Dec 01 03:18:45 crc kubenswrapper[4880]: E1201 03:18:45.482226 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerName="init" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.482232 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerName="init" Dec 01 03:18:45 crc kubenswrapper[4880]: E1201 03:18:45.482242 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerName="init" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.482247 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerName="init" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.482400 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a347b94c-07af-4c7e-8642-bb8d1805fcba" containerName="dnsmasq-dns" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.482420 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40b8e9f-f0a2-41fb-9141-80262a6f64bb" containerName="dnsmasq-dns" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.483005 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.487424 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.487692 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.487802 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.488269 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.500527 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf"] Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.556885 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5dz\" (UniqueName: \"kubernetes.io/projected/a8c811e3-0ac3-491c-9d43-7b046f92f669-kube-api-access-th5dz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.556969 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: 
I1201 03:18:45.557235 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.557612 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.659535 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.659650 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.659688 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5dz\" (UniqueName: \"kubernetes.io/projected/a8c811e3-0ac3-491c-9d43-7b046f92f669-kube-api-access-th5dz\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.659725 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.665728 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.665734 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.668268 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.691585 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5dz\" (UniqueName: \"kubernetes.io/projected/a8c811e3-0ac3-491c-9d43-7b046f92f669-kube-api-access-th5dz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:45 crc kubenswrapper[4880]: I1201 03:18:45.805981 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:18:46 crc kubenswrapper[4880]: W1201 03:18:46.396494 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c811e3_0ac3_491c_9d43_7b046f92f669.slice/crio-c1e84cc31f7adf275d29470f7a7d5662bdb4f057d79e8db5c3c16c108356ff37 WatchSource:0}: Error finding container c1e84cc31f7adf275d29470f7a7d5662bdb4f057d79e8db5c3c16c108356ff37: Status 404 returned error can't find the container with id c1e84cc31f7adf275d29470f7a7d5662bdb4f057d79e8db5c3c16c108356ff37 Dec 01 03:18:46 crc kubenswrapper[4880]: I1201 03:18:46.399588 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf"] Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.067619 4880 generic.go:334] "Generic (PLEG): container finished" podID="6d0d4ce3-3730-423f-89a7-8190f9275a00" containerID="80d09d162a28cb221a1690c329e940f173eff33187473e5786b45e334994936f" exitCode=0 Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.067897 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d0d4ce3-3730-423f-89a7-8190f9275a00","Type":"ContainerDied","Data":"80d09d162a28cb221a1690c329e940f173eff33187473e5786b45e334994936f"} Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.070127 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" event={"ID":"a8c811e3-0ac3-491c-9d43-7b046f92f669","Type":"ContainerStarted","Data":"c1e84cc31f7adf275d29470f7a7d5662bdb4f057d79e8db5c3c16c108356ff37"} Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.369683 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.370152 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.370210 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.370952 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fa73d5a87af237b0d0a9c3f24f3c3545af69a32a8108a4ef1e39e8382145766"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:18:47 crc kubenswrapper[4880]: I1201 03:18:47.371030 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://8fa73d5a87af237b0d0a9c3f24f3c3545af69a32a8108a4ef1e39e8382145766" 
gracePeriod=600 Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.083260 4880 generic.go:334] "Generic (PLEG): container finished" podID="f49c9908-451f-43c0-a143-8269153c4a4f" containerID="c0e9f30a7376eb5a53222c800d18bc9277b442cb3d45f5056840c2b86760d9b3" exitCode=0 Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.083340 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f49c9908-451f-43c0-a143-8269153c4a4f","Type":"ContainerDied","Data":"c0e9f30a7376eb5a53222c800d18bc9277b442cb3d45f5056840c2b86760d9b3"} Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.091913 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="8fa73d5a87af237b0d0a9c3f24f3c3545af69a32a8108a4ef1e39e8382145766" exitCode=0 Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.092018 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"8fa73d5a87af237b0d0a9c3f24f3c3545af69a32a8108a4ef1e39e8382145766"} Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.092134 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4"} Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.092159 4880 scope.go:117] "RemoveContainer" containerID="34d41201e834b41f2c5149b0278e08d421cef1c0ed99b101f5ffb45ff209ff57" Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.095103 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d0d4ce3-3730-423f-89a7-8190f9275a00","Type":"ContainerStarted","Data":"84b8cfc0fc07983df6854f0a0a0679e2aa5d412ca38ab8d5a0a0d712ceebd2b0"} Dec 01 
03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.095662 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 03:18:48 crc kubenswrapper[4880]: I1201 03:18:48.186728 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.186713089 podStartE2EDuration="38.186713089s" podCreationTimestamp="2025-12-01 03:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:18:48.17611771 +0000 UTC m=+1357.687372092" watchObservedRunningTime="2025-12-01 03:18:48.186713089 +0000 UTC m=+1357.697967461" Dec 01 03:18:49 crc kubenswrapper[4880]: I1201 03:18:49.114583 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f49c9908-451f-43c0-a143-8269153c4a4f","Type":"ContainerStarted","Data":"064070b959229c0fec6947e747c307d4cd586b76bb0e031e02d2a13b505cb062"} Dec 01 03:18:49 crc kubenswrapper[4880]: I1201 03:18:49.115430 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:18:49 crc kubenswrapper[4880]: I1201 03:18:49.161603 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.161565039 podStartE2EDuration="38.161565039s" podCreationTimestamp="2025-12-01 03:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 03:18:49.150998651 +0000 UTC m=+1358.662253023" watchObservedRunningTime="2025-12-01 03:18:49.161565039 +0000 UTC m=+1358.672819411" Dec 01 03:18:59 crc kubenswrapper[4880]: I1201 03:18:59.203956 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" 
event={"ID":"a8c811e3-0ac3-491c-9d43-7b046f92f669","Type":"ContainerStarted","Data":"643cdb9a03253fb9a7c58751f58e18d366fc5c9cc3abea7d722ce451cdcfc781"} Dec 01 03:18:59 crc kubenswrapper[4880]: I1201 03:18:59.239985 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" podStartSLOduration=2.436177539 podStartE2EDuration="14.239964374s" podCreationTimestamp="2025-12-01 03:18:45 +0000 UTC" firstStartedPulling="2025-12-01 03:18:46.398347436 +0000 UTC m=+1355.909601818" lastFinishedPulling="2025-12-01 03:18:58.202134281 +0000 UTC m=+1367.713388653" observedRunningTime="2025-12-01 03:18:59.225348706 +0000 UTC m=+1368.736603078" watchObservedRunningTime="2025-12-01 03:18:59.239964374 +0000 UTC m=+1368.751218746" Dec 01 03:19:01 crc kubenswrapper[4880]: I1201 03:19:01.432093 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 03:19:02 crc kubenswrapper[4880]: I1201 03:19:02.345035 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 03:19:10 crc kubenswrapper[4880]: I1201 03:19:10.326529 4880 generic.go:334] "Generic (PLEG): container finished" podID="a8c811e3-0ac3-491c-9d43-7b046f92f669" containerID="643cdb9a03253fb9a7c58751f58e18d366fc5c9cc3abea7d722ce451cdcfc781" exitCode=0 Dec 01 03:19:10 crc kubenswrapper[4880]: I1201 03:19:10.326663 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" event={"ID":"a8c811e3-0ac3-491c-9d43-7b046f92f669","Type":"ContainerDied","Data":"643cdb9a03253fb9a7c58751f58e18d366fc5c9cc3abea7d722ce451cdcfc781"} Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.816027 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.917322 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th5dz\" (UniqueName: \"kubernetes.io/projected/a8c811e3-0ac3-491c-9d43-7b046f92f669-kube-api-access-th5dz\") pod \"a8c811e3-0ac3-491c-9d43-7b046f92f669\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.917646 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-inventory\") pod \"a8c811e3-0ac3-491c-9d43-7b046f92f669\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.917759 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-repo-setup-combined-ca-bundle\") pod \"a8c811e3-0ac3-491c-9d43-7b046f92f669\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.918006 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-ssh-key\") pod \"a8c811e3-0ac3-491c-9d43-7b046f92f669\" (UID: \"a8c811e3-0ac3-491c-9d43-7b046f92f669\") " Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.928013 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c811e3-0ac3-491c-9d43-7b046f92f669-kube-api-access-th5dz" (OuterVolumeSpecName: "kube-api-access-th5dz") pod "a8c811e3-0ac3-491c-9d43-7b046f92f669" (UID: "a8c811e3-0ac3-491c-9d43-7b046f92f669"). InnerVolumeSpecName "kube-api-access-th5dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.931208 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a8c811e3-0ac3-491c-9d43-7b046f92f669" (UID: "a8c811e3-0ac3-491c-9d43-7b046f92f669"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.957968 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a8c811e3-0ac3-491c-9d43-7b046f92f669" (UID: "a8c811e3-0ac3-491c-9d43-7b046f92f669"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:19:11 crc kubenswrapper[4880]: I1201 03:19:11.980020 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-inventory" (OuterVolumeSpecName: "inventory") pod "a8c811e3-0ac3-491c-9d43-7b046f92f669" (UID: "a8c811e3-0ac3-491c-9d43-7b046f92f669"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.020584 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th5dz\" (UniqueName: \"kubernetes.io/projected/a8c811e3-0ac3-491c-9d43-7b046f92f669-kube-api-access-th5dz\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.020610 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.020619 4880 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.020630 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8c811e3-0ac3-491c-9d43-7b046f92f669-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.354479 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" event={"ID":"a8c811e3-0ac3-491c-9d43-7b046f92f669","Type":"ContainerDied","Data":"c1e84cc31f7adf275d29470f7a7d5662bdb4f057d79e8db5c3c16c108356ff37"} Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.354914 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e84cc31f7adf275d29470f7a7d5662bdb4f057d79e8db5c3c16c108356ff37" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.354730 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-tnvmf" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.545537 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8"] Dec 01 03:19:12 crc kubenswrapper[4880]: E1201 03:19:12.545933 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c811e3-0ac3-491c-9d43-7b046f92f669" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.545949 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c811e3-0ac3-491c-9d43-7b046f92f669" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.546137 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c811e3-0ac3-491c-9d43-7b046f92f669" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.546717 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.551136 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.551242 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.551813 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.555117 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.566239 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8"] Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.632923 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.633017 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882n4\" (UniqueName: \"kubernetes.io/projected/6a862901-286e-4859-8c9f-265bdf7845f0-kube-api-access-882n4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.633049 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.735136 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.735355 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882n4\" (UniqueName: \"kubernetes.io/projected/6a862901-286e-4859-8c9f-265bdf7845f0-kube-api-access-882n4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.735410 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.739810 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.752854 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.753736 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882n4\" (UniqueName: \"kubernetes.io/projected/6a862901-286e-4859-8c9f-265bdf7845f0-kube-api-access-882n4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nr5c8\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:12 crc kubenswrapper[4880]: I1201 03:19:12.888998 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:13 crc kubenswrapper[4880]: I1201 03:19:13.505856 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8"] Dec 01 03:19:13 crc kubenswrapper[4880]: W1201 03:19:13.518005 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a862901_286e_4859_8c9f_265bdf7845f0.slice/crio-4a4f37514f960f9159d2c2462017faa95a8a80796f74ba11f2e2da2f0a8c707f WatchSource:0}: Error finding container 4a4f37514f960f9159d2c2462017faa95a8a80796f74ba11f2e2da2f0a8c707f: Status 404 returned error can't find the container with id 4a4f37514f960f9159d2c2462017faa95a8a80796f74ba11f2e2da2f0a8c707f Dec 01 03:19:14 crc kubenswrapper[4880]: I1201 03:19:14.379399 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" event={"ID":"6a862901-286e-4859-8c9f-265bdf7845f0","Type":"ContainerStarted","Data":"05ce8059870125bc3bcbe9df5e63a6afe15ae8097cfcad7f00ec27b32ccf6c8c"} Dec 01 03:19:14 crc kubenswrapper[4880]: I1201 03:19:14.379739 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" event={"ID":"6a862901-286e-4859-8c9f-265bdf7845f0","Type":"ContainerStarted","Data":"4a4f37514f960f9159d2c2462017faa95a8a80796f74ba11f2e2da2f0a8c707f"} Dec 01 03:19:14 crc kubenswrapper[4880]: I1201 03:19:14.405843 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" podStartSLOduration=1.974691676 podStartE2EDuration="2.405824878s" podCreationTimestamp="2025-12-01 03:19:12 +0000 UTC" firstStartedPulling="2025-12-01 03:19:13.52073439 +0000 UTC m=+1383.031988772" lastFinishedPulling="2025-12-01 03:19:13.951867562 +0000 UTC m=+1383.463121974" observedRunningTime="2025-12-01 
03:19:14.400125432 +0000 UTC m=+1383.911379844" watchObservedRunningTime="2025-12-01 03:19:14.405824878 +0000 UTC m=+1383.917079260" Dec 01 03:19:17 crc kubenswrapper[4880]: I1201 03:19:17.420482 4880 generic.go:334] "Generic (PLEG): container finished" podID="6a862901-286e-4859-8c9f-265bdf7845f0" containerID="05ce8059870125bc3bcbe9df5e63a6afe15ae8097cfcad7f00ec27b32ccf6c8c" exitCode=0 Dec 01 03:19:17 crc kubenswrapper[4880]: I1201 03:19:17.420588 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" event={"ID":"6a862901-286e-4859-8c9f-265bdf7845f0","Type":"ContainerDied","Data":"05ce8059870125bc3bcbe9df5e63a6afe15ae8097cfcad7f00ec27b32ccf6c8c"} Dec 01 03:19:18 crc kubenswrapper[4880]: I1201 03:19:18.916987 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:18 crc kubenswrapper[4880]: I1201 03:19:18.960126 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-inventory\") pod \"6a862901-286e-4859-8c9f-265bdf7845f0\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " Dec 01 03:19:18 crc kubenswrapper[4880]: I1201 03:19:18.960321 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-882n4\" (UniqueName: \"kubernetes.io/projected/6a862901-286e-4859-8c9f-265bdf7845f0-kube-api-access-882n4\") pod \"6a862901-286e-4859-8c9f-265bdf7845f0\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " Dec 01 03:19:18 crc kubenswrapper[4880]: I1201 03:19:18.960430 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-ssh-key\") pod \"6a862901-286e-4859-8c9f-265bdf7845f0\" (UID: \"6a862901-286e-4859-8c9f-265bdf7845f0\") " Dec 01 03:19:18 crc 
kubenswrapper[4880]: I1201 03:19:18.974223 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a862901-286e-4859-8c9f-265bdf7845f0-kube-api-access-882n4" (OuterVolumeSpecName: "kube-api-access-882n4") pod "6a862901-286e-4859-8c9f-265bdf7845f0" (UID: "6a862901-286e-4859-8c9f-265bdf7845f0"). InnerVolumeSpecName "kube-api-access-882n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:19:18 crc kubenswrapper[4880]: I1201 03:19:18.987730 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a862901-286e-4859-8c9f-265bdf7845f0" (UID: "6a862901-286e-4859-8c9f-265bdf7845f0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.013314 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-inventory" (OuterVolumeSpecName: "inventory") pod "6a862901-286e-4859-8c9f-265bdf7845f0" (UID: "6a862901-286e-4859-8c9f-265bdf7845f0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.063360 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-882n4\" (UniqueName: \"kubernetes.io/projected/6a862901-286e-4859-8c9f-265bdf7845f0-kube-api-access-882n4\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.063422 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.063443 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a862901-286e-4859-8c9f-265bdf7845f0-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.455264 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" event={"ID":"6a862901-286e-4859-8c9f-265bdf7845f0","Type":"ContainerDied","Data":"4a4f37514f960f9159d2c2462017faa95a8a80796f74ba11f2e2da2f0a8c707f"} Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.455603 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a4f37514f960f9159d2c2462017faa95a8a80796f74ba11f2e2da2f0a8c707f" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.455366 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nr5c8" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.536695 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb"] Dec 01 03:19:19 crc kubenswrapper[4880]: E1201 03:19:19.538046 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a862901-286e-4859-8c9f-265bdf7845f0" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.538071 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a862901-286e-4859-8c9f-265bdf7845f0" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.539726 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a862901-286e-4859-8c9f-265bdf7845f0" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.546011 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.548764 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.550015 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.550350 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.550560 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.566901 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb"] Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.683049 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.683161 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.683218 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.683245 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtlr\" (UniqueName: \"kubernetes.io/projected/c984183e-550c-4212-bbb1-daa09dc6ea4e-kube-api-access-vjtlr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.785017 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.785333 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.785524 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.785595 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtlr\" (UniqueName: \"kubernetes.io/projected/c984183e-550c-4212-bbb1-daa09dc6ea4e-kube-api-access-vjtlr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.790325 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.790373 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.798543 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.814654 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vjtlr\" (UniqueName: \"kubernetes.io/projected/c984183e-550c-4212-bbb1-daa09dc6ea4e-kube-api-access-vjtlr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:19 crc kubenswrapper[4880]: I1201 03:19:19.875990 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:19:20 crc kubenswrapper[4880]: I1201 03:19:20.438209 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb"] Dec 01 03:19:20 crc kubenswrapper[4880]: I1201 03:19:20.466337 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" event={"ID":"c984183e-550c-4212-bbb1-daa09dc6ea4e","Type":"ContainerStarted","Data":"70099117ecccdcdf0cb6d3ff0800139218acc958233ef44e784c3d1b7c8d1d3a"} Dec 01 03:19:21 crc kubenswrapper[4880]: I1201 03:19:21.481644 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" event={"ID":"c984183e-550c-4212-bbb1-daa09dc6ea4e","Type":"ContainerStarted","Data":"8f72425f85e58d7e7e0cdda22bccc6de62e2a4ab76bbc8fda0c0e17ddc26a92c"} Dec 01 03:19:21 crc kubenswrapper[4880]: I1201 03:19:21.504905 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" podStartSLOduration=2.037802085 podStartE2EDuration="2.504890472s" podCreationTimestamp="2025-12-01 03:19:19 +0000 UTC" firstStartedPulling="2025-12-01 03:19:20.451059969 +0000 UTC m=+1389.962314351" lastFinishedPulling="2025-12-01 03:19:20.918148326 +0000 UTC m=+1390.429402738" observedRunningTime="2025-12-01 03:19:21.501934002 +0000 UTC m=+1391.013188374" 
watchObservedRunningTime="2025-12-01 03:19:21.504890472 +0000 UTC m=+1391.016144844" Dec 01 03:19:28 crc kubenswrapper[4880]: I1201 03:19:28.434846 4880 scope.go:117] "RemoveContainer" containerID="e7e4b48f5e9544abff424672ecdf08ae9130b7622b111ec3da8e82947db4c622" Dec 01 03:20:28 crc kubenswrapper[4880]: I1201 03:20:28.575222 4880 scope.go:117] "RemoveContainer" containerID="fbaad87f97383177ef74287211bef8bf252dc80c0984dd3baf220229501ec8f0" Dec 01 03:20:47 crc kubenswrapper[4880]: I1201 03:20:47.369057 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:20:47 crc kubenswrapper[4880]: I1201 03:20:47.369633 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.424516 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wrwrw"] Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.430468 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.437961 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrwrw"] Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.563422 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhm7\" (UniqueName: \"kubernetes.io/projected/b3b94139-3a15-4d42-b669-ed0b114870a9-kube-api-access-khhm7\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.563471 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-utilities\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.563596 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-catalog-content\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.664923 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-catalog-content\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.664991 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-khhm7\" (UniqueName: \"kubernetes.io/projected/b3b94139-3a15-4d42-b669-ed0b114870a9-kube-api-access-khhm7\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.665015 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-utilities\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.665739 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-utilities\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.665839 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-catalog-content\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.683069 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khhm7\" (UniqueName: \"kubernetes.io/projected/b3b94139-3a15-4d42-b669-ed0b114870a9-kube-api-access-khhm7\") pod \"community-operators-wrwrw\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:09 crc kubenswrapper[4880]: I1201 03:21:09.766426 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:10 crc kubenswrapper[4880]: I1201 03:21:10.227945 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrwrw"] Dec 01 03:21:10 crc kubenswrapper[4880]: I1201 03:21:10.864626 4880 generic.go:334] "Generic (PLEG): container finished" podID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerID="c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770" exitCode=0 Dec 01 03:21:10 crc kubenswrapper[4880]: I1201 03:21:10.864912 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrwrw" event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerDied","Data":"c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770"} Dec 01 03:21:10 crc kubenswrapper[4880]: I1201 03:21:10.864939 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrwrw" event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerStarted","Data":"58688dbf0469fbb6d26cdd719cfee6b25ca42373076c17b744e2d4f8fc359d6f"} Dec 01 03:21:11 crc kubenswrapper[4880]: I1201 03:21:11.876058 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrwrw" event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerStarted","Data":"008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645"} Dec 01 03:21:12 crc kubenswrapper[4880]: I1201 03:21:12.885596 4880 generic.go:334] "Generic (PLEG): container finished" podID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerID="008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645" exitCode=0 Dec 01 03:21:12 crc kubenswrapper[4880]: I1201 03:21:12.886443 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrwrw" 
event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerDied","Data":"008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645"} Dec 01 03:21:15 crc kubenswrapper[4880]: I1201 03:21:15.936035 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrwrw" event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerStarted","Data":"bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656"} Dec 01 03:21:15 crc kubenswrapper[4880]: I1201 03:21:15.958350 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wrwrw" podStartSLOduration=2.239514223 podStartE2EDuration="6.958332472s" podCreationTimestamp="2025-12-01 03:21:09 +0000 UTC" firstStartedPulling="2025-12-01 03:21:10.868727758 +0000 UTC m=+1500.379982130" lastFinishedPulling="2025-12-01 03:21:15.587545977 +0000 UTC m=+1505.098800379" observedRunningTime="2025-12-01 03:21:15.955932685 +0000 UTC m=+1505.467187097" watchObservedRunningTime="2025-12-01 03:21:15.958332472 +0000 UTC m=+1505.469586844" Dec 01 03:21:17 crc kubenswrapper[4880]: I1201 03:21:17.369346 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:21:17 crc kubenswrapper[4880]: I1201 03:21:17.370762 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:21:19 crc kubenswrapper[4880]: I1201 03:21:19.767466 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:19 crc kubenswrapper[4880]: I1201 03:21:19.767769 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:19 crc kubenswrapper[4880]: I1201 03:21:19.823687 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:20 crc kubenswrapper[4880]: I1201 03:21:20.093471 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:20 crc kubenswrapper[4880]: I1201 03:21:20.157289 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrwrw"] Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.030003 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wrwrw" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="registry-server" containerID="cri-o://bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656" gracePeriod=2 Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.498964 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.635311 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khhm7\" (UniqueName: \"kubernetes.io/projected/b3b94139-3a15-4d42-b669-ed0b114870a9-kube-api-access-khhm7\") pod \"b3b94139-3a15-4d42-b669-ed0b114870a9\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.635454 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-utilities\") pod \"b3b94139-3a15-4d42-b669-ed0b114870a9\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.635496 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-catalog-content\") pod \"b3b94139-3a15-4d42-b669-ed0b114870a9\" (UID: \"b3b94139-3a15-4d42-b669-ed0b114870a9\") " Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.636411 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-utilities" (OuterVolumeSpecName: "utilities") pod "b3b94139-3a15-4d42-b669-ed0b114870a9" (UID: "b3b94139-3a15-4d42-b669-ed0b114870a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.658723 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b94139-3a15-4d42-b669-ed0b114870a9-kube-api-access-khhm7" (OuterVolumeSpecName: "kube-api-access-khhm7") pod "b3b94139-3a15-4d42-b669-ed0b114870a9" (UID: "b3b94139-3a15-4d42-b669-ed0b114870a9"). InnerVolumeSpecName "kube-api-access-khhm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.706860 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3b94139-3a15-4d42-b669-ed0b114870a9" (UID: "b3b94139-3a15-4d42-b669-ed0b114870a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.737891 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.737922 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3b94139-3a15-4d42-b669-ed0b114870a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:22 crc kubenswrapper[4880]: I1201 03:21:22.737934 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khhm7\" (UniqueName: \"kubernetes.io/projected/b3b94139-3a15-4d42-b669-ed0b114870a9-kube-api-access-khhm7\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.042555 4880 generic.go:334] "Generic (PLEG): container finished" podID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerID="bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656" exitCode=0 Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.042617 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrwrw" event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerDied","Data":"bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656"} Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.042653 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wrwrw" event={"ID":"b3b94139-3a15-4d42-b669-ed0b114870a9","Type":"ContainerDied","Data":"58688dbf0469fbb6d26cdd719cfee6b25ca42373076c17b744e2d4f8fc359d6f"} Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.042710 4880 scope.go:117] "RemoveContainer" containerID="bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.042967 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrwrw" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.061007 4880 scope.go:117] "RemoveContainer" containerID="008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.078091 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrwrw"] Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.086894 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wrwrw"] Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.116285 4880 scope.go:117] "RemoveContainer" containerID="c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.140092 4880 scope.go:117] "RemoveContainer" containerID="bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656" Dec 01 03:21:23 crc kubenswrapper[4880]: E1201 03:21:23.140482 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656\": container with ID starting with bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656 not found: ID does not exist" containerID="bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 
03:21:23.140527 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656"} err="failed to get container status \"bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656\": rpc error: code = NotFound desc = could not find container \"bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656\": container with ID starting with bdd94f307a15db525de043b89830ef187e5e9b219b26630424a5c613662af656 not found: ID does not exist" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.140561 4880 scope.go:117] "RemoveContainer" containerID="008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645" Dec 01 03:21:23 crc kubenswrapper[4880]: E1201 03:21:23.140854 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645\": container with ID starting with 008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645 not found: ID does not exist" containerID="008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.140920 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645"} err="failed to get container status \"008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645\": rpc error: code = NotFound desc = could not find container \"008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645\": container with ID starting with 008a81ebb787ecf8c7fa7fc8a140a3311678a5039e36b83e49bd77a426b73645 not found: ID does not exist" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.140949 4880 scope.go:117] "RemoveContainer" containerID="c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770" Dec 01 03:21:23 crc 
kubenswrapper[4880]: E1201 03:21:23.141148 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770\": container with ID starting with c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770 not found: ID does not exist" containerID="c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770" Dec 01 03:21:23 crc kubenswrapper[4880]: I1201 03:21:23.141176 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770"} err="failed to get container status \"c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770\": rpc error: code = NotFound desc = could not find container \"c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770\": container with ID starting with c8d124abed6675e23f684ebbb9af7febfaa0cd3a49bb3517a80a102402fe7770 not found: ID does not exist" Dec 01 03:21:24 crc kubenswrapper[4880]: I1201 03:21:24.799154 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" path="/var/lib/kubelet/pods/b3b94139-3a15-4d42-b669-ed0b114870a9/volumes" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.093526 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rcwns"] Dec 01 03:21:25 crc kubenswrapper[4880]: E1201 03:21:25.094381 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="registry-server" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.094406 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="registry-server" Dec 01 03:21:25 crc kubenswrapper[4880]: E1201 03:21:25.094448 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="extract-utilities" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.094459 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="extract-utilities" Dec 01 03:21:25 crc kubenswrapper[4880]: E1201 03:21:25.094470 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="extract-content" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.094478 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="extract-content" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.094744 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b94139-3a15-4d42-b669-ed0b114870a9" containerName="registry-server" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.096799 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.119066 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rcwns"] Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.186048 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-utilities\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.186499 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-catalog-content\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " 
pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.186663 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4s4f\" (UniqueName: \"kubernetes.io/projected/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-kube-api-access-x4s4f\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.288274 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-catalog-content\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.288485 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4s4f\" (UniqueName: \"kubernetes.io/projected/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-kube-api-access-x4s4f\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.288621 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-utilities\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.289607 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-catalog-content\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " 
pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.289682 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-utilities\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.313618 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4s4f\" (UniqueName: \"kubernetes.io/projected/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-kube-api-access-x4s4f\") pod \"redhat-operators-rcwns\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.418751 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:25 crc kubenswrapper[4880]: I1201 03:21:25.946478 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rcwns"] Dec 01 03:21:26 crc kubenswrapper[4880]: I1201 03:21:26.079566 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerStarted","Data":"04e60e3c1adf8097ed9868bd59fe832ee5e6766dcbef5adfb7182218ee33a0f6"} Dec 01 03:21:27 crc kubenswrapper[4880]: I1201 03:21:27.123213 4880 generic.go:334] "Generic (PLEG): container finished" podID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerID="d1d5a3eb59632de9c8ddcee1a4c74b07d16324448f664508caaf00b6e98c9281" exitCode=0 Dec 01 03:21:27 crc kubenswrapper[4880]: I1201 03:21:27.123584 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" 
event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerDied","Data":"d1d5a3eb59632de9c8ddcee1a4c74b07d16324448f664508caaf00b6e98c9281"} Dec 01 03:21:28 crc kubenswrapper[4880]: I1201 03:21:28.133567 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerStarted","Data":"9fa0cd6d7d365ca54be6aa8349e301edba2cd5558e70d139999da7b5ff83bc3f"} Dec 01 03:21:28 crc kubenswrapper[4880]: I1201 03:21:28.683965 4880 scope.go:117] "RemoveContainer" containerID="21b2513810dd23a24b382d31036fdfbcee15b516f1d10507dbb2103e03583774" Dec 01 03:21:28 crc kubenswrapper[4880]: I1201 03:21:28.716083 4880 scope.go:117] "RemoveContainer" containerID="f46f883f710447f5f943bc294873f65548fef128288cc4ea8765587e339070c7" Dec 01 03:21:31 crc kubenswrapper[4880]: I1201 03:21:31.163510 4880 generic.go:334] "Generic (PLEG): container finished" podID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerID="9fa0cd6d7d365ca54be6aa8349e301edba2cd5558e70d139999da7b5ff83bc3f" exitCode=0 Dec 01 03:21:31 crc kubenswrapper[4880]: I1201 03:21:31.163766 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerDied","Data":"9fa0cd6d7d365ca54be6aa8349e301edba2cd5558e70d139999da7b5ff83bc3f"} Dec 01 03:21:32 crc kubenswrapper[4880]: I1201 03:21:32.181192 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerStarted","Data":"6f1242b8db2aaff2bbea1d21de4fa7ec90a201ad78a6a26a11d286e965b58a41"} Dec 01 03:21:32 crc kubenswrapper[4880]: I1201 03:21:32.204554 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rcwns" podStartSLOduration=2.712805387 podStartE2EDuration="7.20453561s" 
podCreationTimestamp="2025-12-01 03:21:25 +0000 UTC" firstStartedPulling="2025-12-01 03:21:27.130012855 +0000 UTC m=+1516.641267227" lastFinishedPulling="2025-12-01 03:21:31.621743048 +0000 UTC m=+1521.132997450" observedRunningTime="2025-12-01 03:21:32.196655892 +0000 UTC m=+1521.707910294" watchObservedRunningTime="2025-12-01 03:21:32.20453561 +0000 UTC m=+1521.715789982" Dec 01 03:21:35 crc kubenswrapper[4880]: I1201 03:21:35.419691 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:35 crc kubenswrapper[4880]: I1201 03:21:35.419985 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:36 crc kubenswrapper[4880]: I1201 03:21:36.489360 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rcwns" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="registry-server" probeResult="failure" output=< Dec 01 03:21:36 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 03:21:36 crc kubenswrapper[4880]: > Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.061949 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f8trt"] Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.065140 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.078210 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8trt"] Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.217205 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8lm\" (UniqueName: \"kubernetes.io/projected/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-kube-api-access-2r8lm\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.217327 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-catalog-content\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.217355 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-utilities\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.319020 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r8lm\" (UniqueName: \"kubernetes.io/projected/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-kube-api-access-2r8lm\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.319225 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-catalog-content\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.319277 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-utilities\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.320093 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-utilities\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.320099 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-catalog-content\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.338643 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r8lm\" (UniqueName: \"kubernetes.io/projected/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-kube-api-access-2r8lm\") pod \"certified-operators-f8trt\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.424555 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:44 crc kubenswrapper[4880]: I1201 03:21:44.856743 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8trt"] Dec 01 03:21:45 crc kubenswrapper[4880]: I1201 03:21:45.409059 4880 generic.go:334] "Generic (PLEG): container finished" podID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerID="6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5" exitCode=0 Dec 01 03:21:45 crc kubenswrapper[4880]: I1201 03:21:45.409189 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerDied","Data":"6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5"} Dec 01 03:21:45 crc kubenswrapper[4880]: I1201 03:21:45.409407 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerStarted","Data":"7511bda01f50448c4f4e36369de28244fd163c7e43299699427d787cea1d7358"} Dec 01 03:21:45 crc kubenswrapper[4880]: I1201 03:21:45.483097 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:45 crc kubenswrapper[4880]: I1201 03:21:45.539265 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:46 crc kubenswrapper[4880]: I1201 03:21:46.438383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerStarted","Data":"a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6"} Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.368852 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.369456 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.369568 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.370343 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.370466 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" gracePeriod=600 Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.457062 4880 generic.go:334] "Generic (PLEG): container finished" podID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerID="a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6" exitCode=0 Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.457129 4880 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerDied","Data":"a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6"} Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.831468 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rcwns"] Dec 01 03:21:47 crc kubenswrapper[4880]: I1201 03:21:47.831684 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rcwns" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="registry-server" containerID="cri-o://6f1242b8db2aaff2bbea1d21de4fa7ec90a201ad78a6a26a11d286e965b58a41" gracePeriod=2 Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.470201 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" exitCode=0 Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.470301 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4"} Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.470681 4880 scope.go:117] "RemoveContainer" containerID="8fa73d5a87af237b0d0a9c3f24f3c3545af69a32a8108a4ef1e39e8382145766" Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.474899 4880 generic.go:334] "Generic (PLEG): container finished" podID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerID="6f1242b8db2aaff2bbea1d21de4fa7ec90a201ad78a6a26a11d286e965b58a41" exitCode=0 Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.474928 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" 
event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerDied","Data":"6f1242b8db2aaff2bbea1d21de4fa7ec90a201ad78a6a26a11d286e965b58a41"} Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.480407 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerStarted","Data":"5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba"} Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.513152 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f8trt" podStartSLOduration=1.83297672 podStartE2EDuration="4.513131085s" podCreationTimestamp="2025-12-01 03:21:44 +0000 UTC" firstStartedPulling="2025-12-01 03:21:45.411563469 +0000 UTC m=+1534.922817831" lastFinishedPulling="2025-12-01 03:21:48.091717824 +0000 UTC m=+1537.602972196" observedRunningTime="2025-12-01 03:21:48.499837188 +0000 UTC m=+1538.011091590" watchObservedRunningTime="2025-12-01 03:21:48.513131085 +0000 UTC m=+1538.024385467" Dec 01 03:21:48 crc kubenswrapper[4880]: E1201 03:21:48.600108 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:21:48 crc kubenswrapper[4880]: I1201 03:21:48.956134 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.125190 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4s4f\" (UniqueName: \"kubernetes.io/projected/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-kube-api-access-x4s4f\") pod \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.125527 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-catalog-content\") pod \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.125609 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-utilities\") pod \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\" (UID: \"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8\") " Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.126455 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-utilities" (OuterVolumeSpecName: "utilities") pod "e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" (UID: "e78cbf6d-a100-4e70-ba3a-29b1159c6cf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.130972 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-kube-api-access-x4s4f" (OuterVolumeSpecName: "kube-api-access-x4s4f") pod "e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" (UID: "e78cbf6d-a100-4e70-ba3a-29b1159c6cf8"). InnerVolumeSpecName "kube-api-access-x4s4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.228441 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4s4f\" (UniqueName: \"kubernetes.io/projected/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-kube-api-access-x4s4f\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.228477 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.233807 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" (UID: "e78cbf6d-a100-4e70-ba3a-29b1159c6cf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.330712 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.493114 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:21:49 crc kubenswrapper[4880]: E1201 03:21:49.493429 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:21:49 
crc kubenswrapper[4880]: I1201 03:21:49.494840 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwns" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.494832 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwns" event={"ID":"e78cbf6d-a100-4e70-ba3a-29b1159c6cf8","Type":"ContainerDied","Data":"04e60e3c1adf8097ed9868bd59fe832ee5e6766dcbef5adfb7182218ee33a0f6"} Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.494981 4880 scope.go:117] "RemoveContainer" containerID="6f1242b8db2aaff2bbea1d21de4fa7ec90a201ad78a6a26a11d286e965b58a41" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.522018 4880 scope.go:117] "RemoveContainer" containerID="9fa0cd6d7d365ca54be6aa8349e301edba2cd5558e70d139999da7b5ff83bc3f" Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.548056 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rcwns"] Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.556926 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rcwns"] Dec 01 03:21:49 crc kubenswrapper[4880]: I1201 03:21:49.561176 4880 scope.go:117] "RemoveContainer" containerID="d1d5a3eb59632de9c8ddcee1a4c74b07d16324448f664508caaf00b6e98c9281" Dec 01 03:21:50 crc kubenswrapper[4880]: I1201 03:21:50.799860 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" path="/var/lib/kubelet/pods/e78cbf6d-a100-4e70-ba3a-29b1159c6cf8/volumes" Dec 01 03:21:54 crc kubenswrapper[4880]: I1201 03:21:54.424988 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:54 crc kubenswrapper[4880]: I1201 03:21:54.425055 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:54 crc kubenswrapper[4880]: I1201 03:21:54.495397 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:54 crc kubenswrapper[4880]: I1201 03:21:54.629725 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:54 crc kubenswrapper[4880]: I1201 03:21:54.740169 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8trt"] Dec 01 03:21:56 crc kubenswrapper[4880]: I1201 03:21:56.583108 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f8trt" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="registry-server" containerID="cri-o://5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba" gracePeriod=2 Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.079606 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.201008 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-utilities\") pod \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.201124 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-catalog-content\") pod \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.201200 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r8lm\" (UniqueName: \"kubernetes.io/projected/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-kube-api-access-2r8lm\") pod \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\" (UID: \"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122\") " Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.201597 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-utilities" (OuterVolumeSpecName: "utilities") pod "f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" (UID: "f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.201692 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.211201 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-kube-api-access-2r8lm" (OuterVolumeSpecName: "kube-api-access-2r8lm") pod "f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" (UID: "f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122"). InnerVolumeSpecName "kube-api-access-2r8lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.259479 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" (UID: "f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.303614 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r8lm\" (UniqueName: \"kubernetes.io/projected/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-kube-api-access-2r8lm\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.303890 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.597273 4880 generic.go:334] "Generic (PLEG): container finished" podID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerID="5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba" exitCode=0 Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.597342 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerDied","Data":"5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba"} Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.597379 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8trt" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.597408 4880 scope.go:117] "RemoveContainer" containerID="5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.597390 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8trt" event={"ID":"f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122","Type":"ContainerDied","Data":"7511bda01f50448c4f4e36369de28244fd163c7e43299699427d787cea1d7358"} Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.622519 4880 scope.go:117] "RemoveContainer" containerID="a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.666597 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8trt"] Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.673672 4880 scope.go:117] "RemoveContainer" containerID="6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.682244 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f8trt"] Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.721418 4880 scope.go:117] "RemoveContainer" containerID="5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba" Dec 01 03:21:57 crc kubenswrapper[4880]: E1201 03:21:57.722599 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba\": container with ID starting with 5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba not found: ID does not exist" containerID="5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.722648 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba"} err="failed to get container status \"5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba\": rpc error: code = NotFound desc = could not find container \"5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba\": container with ID starting with 5c04a29e8e43654fee30be33d6474f12fe9f7c4091c4113ea6b0af28e73f47ba not found: ID does not exist" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.722678 4880 scope.go:117] "RemoveContainer" containerID="a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6" Dec 01 03:21:57 crc kubenswrapper[4880]: E1201 03:21:57.723081 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6\": container with ID starting with a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6 not found: ID does not exist" containerID="a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.723109 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6"} err="failed to get container status \"a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6\": rpc error: code = NotFound desc = could not find container \"a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6\": container with ID starting with a2c4639943cd7e9a94e289645744dae05387b36b41a9f3869fc8192b4ad025e6 not found: ID does not exist" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.723127 4880 scope.go:117] "RemoveContainer" containerID="6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5" Dec 01 03:21:57 crc kubenswrapper[4880]: E1201 
03:21:57.723442 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5\": container with ID starting with 6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5 not found: ID does not exist" containerID="6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5" Dec 01 03:21:57 crc kubenswrapper[4880]: I1201 03:21:57.723466 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5"} err="failed to get container status \"6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5\": rpc error: code = NotFound desc = could not find container \"6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5\": container with ID starting with 6e87ccaaa25e0d352ce730ec467a90e21378c67cf6bca7d00baaea30375597b5 not found: ID does not exist" Dec 01 03:21:58 crc kubenswrapper[4880]: I1201 03:21:58.794673 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" path="/var/lib/kubelet/pods/f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122/volumes" Dec 01 03:22:00 crc kubenswrapper[4880]: I1201 03:22:00.800080 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:22:00 crc kubenswrapper[4880]: E1201 03:22:00.801257 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:22:13 crc kubenswrapper[4880]: I1201 03:22:13.784200 
4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:22:13 crc kubenswrapper[4880]: E1201 03:22:13.785152 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.071671 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ljl9b"] Dec 01 03:22:15 crc kubenswrapper[4880]: E1201 03:22:15.072326 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="extract-utilities" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072337 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="extract-utilities" Dec 01 03:22:15 crc kubenswrapper[4880]: E1201 03:22:15.072349 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="extract-utilities" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072355 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="extract-utilities" Dec 01 03:22:15 crc kubenswrapper[4880]: E1201 03:22:15.072375 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="registry-server" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072381 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="registry-server" Dec 01 03:22:15 crc 
kubenswrapper[4880]: E1201 03:22:15.072415 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="extract-content" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072420 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="extract-content" Dec 01 03:22:15 crc kubenswrapper[4880]: E1201 03:22:15.072431 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="registry-server" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072437 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="registry-server" Dec 01 03:22:15 crc kubenswrapper[4880]: E1201 03:22:15.072446 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="extract-content" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072451 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="extract-content" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072629 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5eaf441-5b3e-4a69-a2fb-d68d5fa2e122" containerName="registry-server" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.072647 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78cbf6d-a100-4e70-ba3a-29b1159c6cf8" containerName="registry-server" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.073961 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.105305 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljl9b"] Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.206341 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spk7b\" (UniqueName: \"kubernetes.io/projected/094eb0a2-99ef-4e09-a793-be7b89007efc-kube-api-access-spk7b\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.206466 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-catalog-content\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.206548 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-utilities\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.308466 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spk7b\" (UniqueName: \"kubernetes.io/projected/094eb0a2-99ef-4e09-a793-be7b89007efc-kube-api-access-spk7b\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.308617 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-catalog-content\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.308713 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-utilities\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.309552 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-utilities\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.310466 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-catalog-content\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.342829 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spk7b\" (UniqueName: \"kubernetes.io/projected/094eb0a2-99ef-4e09-a793-be7b89007efc-kube-api-access-spk7b\") pod \"redhat-marketplace-ljl9b\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.395866 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:15 crc kubenswrapper[4880]: I1201 03:22:15.873175 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljl9b"] Dec 01 03:22:16 crc kubenswrapper[4880]: I1201 03:22:16.842234 4880 generic.go:334] "Generic (PLEG): container finished" podID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerID="0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4" exitCode=0 Dec 01 03:22:16 crc kubenswrapper[4880]: I1201 03:22:16.842542 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerDied","Data":"0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4"} Dec 01 03:22:16 crc kubenswrapper[4880]: I1201 03:22:16.842570 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerStarted","Data":"9d4e8347c353544e81a8361fb1e211a36e428c61b565c5be0792a0a43916ac49"} Dec 01 03:22:16 crc kubenswrapper[4880]: I1201 03:22:16.844336 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:22:17 crc kubenswrapper[4880]: I1201 03:22:17.857430 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerStarted","Data":"fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f"} Dec 01 03:22:18 crc kubenswrapper[4880]: I1201 03:22:18.873329 4880 generic.go:334] "Generic (PLEG): container finished" podID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerID="fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f" exitCode=0 Dec 01 03:22:18 crc kubenswrapper[4880]: I1201 03:22:18.873438 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerDied","Data":"fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f"} Dec 01 03:22:19 crc kubenswrapper[4880]: I1201 03:22:19.883200 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerStarted","Data":"a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e"} Dec 01 03:22:19 crc kubenswrapper[4880]: I1201 03:22:19.914558 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ljl9b" podStartSLOduration=2.312102314 podStartE2EDuration="4.914540898s" podCreationTimestamp="2025-12-01 03:22:15 +0000 UTC" firstStartedPulling="2025-12-01 03:22:16.844110715 +0000 UTC m=+1566.355365087" lastFinishedPulling="2025-12-01 03:22:19.446549289 +0000 UTC m=+1568.957803671" observedRunningTime="2025-12-01 03:22:19.907819338 +0000 UTC m=+1569.419073700" watchObservedRunningTime="2025-12-01 03:22:19.914540898 +0000 UTC m=+1569.425795270" Dec 01 03:22:24 crc kubenswrapper[4880]: I1201 03:22:24.784433 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:22:24 crc kubenswrapper[4880]: E1201 03:22:24.785189 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:22:25 crc kubenswrapper[4880]: I1201 03:22:25.396508 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:25 crc kubenswrapper[4880]: I1201 03:22:25.396593 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:25 crc kubenswrapper[4880]: I1201 03:22:25.483739 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:26 crc kubenswrapper[4880]: I1201 03:22:26.043226 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:26 crc kubenswrapper[4880]: I1201 03:22:26.098625 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljl9b"] Dec 01 03:22:27 crc kubenswrapper[4880]: I1201 03:22:27.990397 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ljl9b" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="registry-server" containerID="cri-o://a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e" gracePeriod=2 Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.441592 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.525407 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-catalog-content\") pod \"094eb0a2-99ef-4e09-a793-be7b89007efc\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.525479 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spk7b\" (UniqueName: \"kubernetes.io/projected/094eb0a2-99ef-4e09-a793-be7b89007efc-kube-api-access-spk7b\") pod \"094eb0a2-99ef-4e09-a793-be7b89007efc\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.525513 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-utilities\") pod \"094eb0a2-99ef-4e09-a793-be7b89007efc\" (UID: \"094eb0a2-99ef-4e09-a793-be7b89007efc\") " Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.527013 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-utilities" (OuterVolumeSpecName: "utilities") pod "094eb0a2-99ef-4e09-a793-be7b89007efc" (UID: "094eb0a2-99ef-4e09-a793-be7b89007efc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.538742 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094eb0a2-99ef-4e09-a793-be7b89007efc-kube-api-access-spk7b" (OuterVolumeSpecName: "kube-api-access-spk7b") pod "094eb0a2-99ef-4e09-a793-be7b89007efc" (UID: "094eb0a2-99ef-4e09-a793-be7b89007efc"). InnerVolumeSpecName "kube-api-access-spk7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.562474 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "094eb0a2-99ef-4e09-a793-be7b89007efc" (UID: "094eb0a2-99ef-4e09-a793-be7b89007efc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.627752 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.627792 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spk7b\" (UniqueName: \"kubernetes.io/projected/094eb0a2-99ef-4e09-a793-be7b89007efc-kube-api-access-spk7b\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.627803 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094eb0a2-99ef-4e09-a793-be7b89007efc-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:28 crc kubenswrapper[4880]: I1201 03:22:28.918829 4880 scope.go:117] "RemoveContainer" containerID="b4fd6cff0eb7d1c286ab99378092e9ca98740a62ffcc749912c6759b5c63b40c" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.002760 4880 generic.go:334] "Generic (PLEG): container finished" podID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerID="a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e" exitCode=0 Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.002813 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" 
event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerDied","Data":"a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e"} Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.002848 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljl9b" event={"ID":"094eb0a2-99ef-4e09-a793-be7b89007efc","Type":"ContainerDied","Data":"9d4e8347c353544e81a8361fb1e211a36e428c61b565c5be0792a0a43916ac49"} Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.002889 4880 scope.go:117] "RemoveContainer" containerID="a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.003057 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljl9b" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.025180 4880 scope.go:117] "RemoveContainer" containerID="fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.034762 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljl9b"] Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.050528 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljl9b"] Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.056617 4880 scope.go:117] "RemoveContainer" containerID="0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.088332 4880 scope.go:117] "RemoveContainer" containerID="a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e" Dec 01 03:22:29 crc kubenswrapper[4880]: E1201 03:22:29.088944 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e\": container 
with ID starting with a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e not found: ID does not exist" containerID="a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.089009 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e"} err="failed to get container status \"a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e\": rpc error: code = NotFound desc = could not find container \"a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e\": container with ID starting with a56c3e68ee9a8d74e7a12b84658e99f78ba0b8e118b6868a2134841ea223055e not found: ID does not exist" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.089045 4880 scope.go:117] "RemoveContainer" containerID="fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f" Dec 01 03:22:29 crc kubenswrapper[4880]: E1201 03:22:29.089492 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f\": container with ID starting with fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f not found: ID does not exist" containerID="fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.089521 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f"} err="failed to get container status \"fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f\": rpc error: code = NotFound desc = could not find container \"fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f\": container with ID starting with fa31af2b49333d66cfea7096ef1b18cb594cfd52419f63f2d8bc04e7e174698f not 
found: ID does not exist" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.089562 4880 scope.go:117] "RemoveContainer" containerID="0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4" Dec 01 03:22:29 crc kubenswrapper[4880]: E1201 03:22:29.089997 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4\": container with ID starting with 0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4 not found: ID does not exist" containerID="0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4" Dec 01 03:22:29 crc kubenswrapper[4880]: I1201 03:22:29.090026 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4"} err="failed to get container status \"0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4\": rpc error: code = NotFound desc = could not find container \"0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4\": container with ID starting with 0e063f658ee97e52a48fbae4ce4e06ede6482c85e76f2e9058b66723a30614b4 not found: ID does not exist" Dec 01 03:22:30 crc kubenswrapper[4880]: I1201 03:22:30.804353 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" path="/var/lib/kubelet/pods/094eb0a2-99ef-4e09-a793-be7b89007efc/volumes" Dec 01 03:22:38 crc kubenswrapper[4880]: I1201 03:22:38.785244 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:22:38 crc kubenswrapper[4880]: E1201 03:22:38.786390 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:22:39 crc kubenswrapper[4880]: I1201 03:22:39.119410 4880 generic.go:334] "Generic (PLEG): container finished" podID="c984183e-550c-4212-bbb1-daa09dc6ea4e" containerID="8f72425f85e58d7e7e0cdda22bccc6de62e2a4ab76bbc8fda0c0e17ddc26a92c" exitCode=0 Dec 01 03:22:39 crc kubenswrapper[4880]: I1201 03:22:39.119497 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" event={"ID":"c984183e-550c-4212-bbb1-daa09dc6ea4e","Type":"ContainerDied","Data":"8f72425f85e58d7e7e0cdda22bccc6de62e2a4ab76bbc8fda0c0e17ddc26a92c"} Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.630672 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.792452 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjtlr\" (UniqueName: \"kubernetes.io/projected/c984183e-550c-4212-bbb1-daa09dc6ea4e-kube-api-access-vjtlr\") pod \"c984183e-550c-4212-bbb1-daa09dc6ea4e\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.792689 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-inventory\") pod \"c984183e-550c-4212-bbb1-daa09dc6ea4e\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.792791 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-bootstrap-combined-ca-bundle\") pod \"c984183e-550c-4212-bbb1-daa09dc6ea4e\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.793538 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-ssh-key\") pod \"c984183e-550c-4212-bbb1-daa09dc6ea4e\" (UID: \"c984183e-550c-4212-bbb1-daa09dc6ea4e\") " Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.799134 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c984183e-550c-4212-bbb1-daa09dc6ea4e-kube-api-access-vjtlr" (OuterVolumeSpecName: "kube-api-access-vjtlr") pod "c984183e-550c-4212-bbb1-daa09dc6ea4e" (UID: "c984183e-550c-4212-bbb1-daa09dc6ea4e"). InnerVolumeSpecName "kube-api-access-vjtlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.803103 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c984183e-550c-4212-bbb1-daa09dc6ea4e" (UID: "c984183e-550c-4212-bbb1-daa09dc6ea4e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.820434 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-inventory" (OuterVolumeSpecName: "inventory") pod "c984183e-550c-4212-bbb1-daa09dc6ea4e" (UID: "c984183e-550c-4212-bbb1-daa09dc6ea4e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.827021 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c984183e-550c-4212-bbb1-daa09dc6ea4e" (UID: "c984183e-550c-4212-bbb1-daa09dc6ea4e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.896479 4880 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.896511 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.896520 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjtlr\" (UniqueName: \"kubernetes.io/projected/c984183e-550c-4212-bbb1-daa09dc6ea4e-kube-api-access-vjtlr\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:40 crc kubenswrapper[4880]: I1201 03:22:40.896530 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c984183e-550c-4212-bbb1-daa09dc6ea4e-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.150399 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" event={"ID":"c984183e-550c-4212-bbb1-daa09dc6ea4e","Type":"ContainerDied","Data":"70099117ecccdcdf0cb6d3ff0800139218acc958233ef44e784c3d1b7c8d1d3a"} Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.150451 4880 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="70099117ecccdcdf0cb6d3ff0800139218acc958233ef44e784c3d1b7c8d1d3a" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.150562 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pjcdb" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.270005 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s"] Dec 01 03:22:41 crc kubenswrapper[4880]: E1201 03:22:41.270567 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c984183e-550c-4212-bbb1-daa09dc6ea4e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.270592 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c984183e-550c-4212-bbb1-daa09dc6ea4e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 03:22:41 crc kubenswrapper[4880]: E1201 03:22:41.270624 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="registry-server" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.270635 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="registry-server" Dec 01 03:22:41 crc kubenswrapper[4880]: E1201 03:22:41.270665 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="extract-utilities" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.270673 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="extract-utilities" Dec 01 03:22:41 crc kubenswrapper[4880]: E1201 03:22:41.270718 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="extract-content" Dec 01 03:22:41 crc 
kubenswrapper[4880]: I1201 03:22:41.270727 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="extract-content" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.270996 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="c984183e-550c-4212-bbb1-daa09dc6ea4e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.271023 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="094eb0a2-99ef-4e09-a793-be7b89007efc" containerName="registry-server" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.271857 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.275146 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.275532 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.275742 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.275930 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.285033 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s"] Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.412472 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lqg\" (UniqueName: 
\"kubernetes.io/projected/a89d6e65-6370-4958-9f09-32f27bbcd834-kube-api-access-f9lqg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.412514 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.412622 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.515393 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.515593 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.515809 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9lqg\" (UniqueName: \"kubernetes.io/projected/a89d6e65-6370-4958-9f09-32f27bbcd834-kube-api-access-f9lqg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.519556 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.520561 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.533829 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9lqg\" (UniqueName: \"kubernetes.io/projected/a89d6e65-6370-4958-9f09-32f27bbcd834-kube-api-access-f9lqg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:41 crc kubenswrapper[4880]: I1201 03:22:41.606445 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:22:42 crc kubenswrapper[4880]: I1201 03:22:42.180788 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s"] Dec 01 03:22:43 crc kubenswrapper[4880]: I1201 03:22:43.185388 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" event={"ID":"a89d6e65-6370-4958-9f09-32f27bbcd834","Type":"ContainerStarted","Data":"fe49a76932645e3cef8eeb121db7f775f58bcb76961ecb4f2ed00bef23ec2c62"} Dec 01 03:22:43 crc kubenswrapper[4880]: I1201 03:22:43.185832 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" event={"ID":"a89d6e65-6370-4958-9f09-32f27bbcd834","Type":"ContainerStarted","Data":"0960e5d372abbd1cefa4398462be492322dbcfb8e934bc933f4c710815e1f538"} Dec 01 03:22:43 crc kubenswrapper[4880]: I1201 03:22:43.217800 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" podStartSLOduration=1.708811625 podStartE2EDuration="2.217772628s" podCreationTimestamp="2025-12-01 03:22:41 +0000 UTC" firstStartedPulling="2025-12-01 03:22:42.197167124 +0000 UTC m=+1591.708421496" lastFinishedPulling="2025-12-01 03:22:42.706128117 +0000 UTC m=+1592.217382499" observedRunningTime="2025-12-01 03:22:43.206297961 +0000 UTC m=+1592.717552393" watchObservedRunningTime="2025-12-01 03:22:43.217772628 +0000 UTC m=+1592.729027040" Dec 01 03:22:46 crc kubenswrapper[4880]: I1201 03:22:46.062676 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-sk25r"] Dec 01 03:22:46 crc kubenswrapper[4880]: I1201 03:22:46.075369 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a5dd-account-create-update-zzwzw"] Dec 01 03:22:46 crc 
kubenswrapper[4880]: I1201 03:22:46.086917 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-sk25r"] Dec 01 03:22:46 crc kubenswrapper[4880]: I1201 03:22:46.097585 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a5dd-account-create-update-zzwzw"] Dec 01 03:22:46 crc kubenswrapper[4880]: I1201 03:22:46.798166 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06daccda-d4f8-43c3-8a2d-b2ebad9e89be" path="/var/lib/kubelet/pods/06daccda-d4f8-43c3-8a2d-b2ebad9e89be/volumes" Dec 01 03:22:46 crc kubenswrapper[4880]: I1201 03:22:46.800377 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f346c3-d584-4c71-8ee4-605e13ab1333" path="/var/lib/kubelet/pods/15f346c3-d584-4c71-8ee4-605e13ab1333/volumes" Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.035073 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-957c-account-create-update-l5xmj"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.048141 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-957c-account-create-update-l5xmj"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.063170 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2a47-account-create-update-6l74s"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.076632 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2a47-account-create-update-6l74s"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.088948 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ps5qj"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.100344 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4mtxq"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.109804 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-create-ps5qj"] Dec 01 03:22:47 crc kubenswrapper[4880]: I1201 03:22:47.118491 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4mtxq"] Dec 01 03:22:48 crc kubenswrapper[4880]: I1201 03:22:48.808090 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dda7a8-466c-4937-a3ba-dda232572d97" path="/var/lib/kubelet/pods/56dda7a8-466c-4937-a3ba-dda232572d97/volumes" Dec 01 03:22:48 crc kubenswrapper[4880]: I1201 03:22:48.810445 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a" path="/var/lib/kubelet/pods/d4bd4dff-a4e3-4e6f-98f3-4f8a00463f5a/volumes" Dec 01 03:22:48 crc kubenswrapper[4880]: I1201 03:22:48.813647 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8526178-42ba-4e79-bbe7-e52de3593c59" path="/var/lib/kubelet/pods/d8526178-42ba-4e79-bbe7-e52de3593c59/volumes" Dec 01 03:22:48 crc kubenswrapper[4880]: I1201 03:22:48.816642 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb221610-a34c-4fd4-9789-9cf67fb330c7" path="/var/lib/kubelet/pods/eb221610-a34c-4fd4-9789-9cf67fb330c7/volumes" Dec 01 03:22:49 crc kubenswrapper[4880]: I1201 03:22:49.784993 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:22:49 crc kubenswrapper[4880]: E1201 03:22:49.785581 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:23:01 crc kubenswrapper[4880]: I1201 03:23:01.784811 4880 scope.go:117] "RemoveContainer" 
containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:23:01 crc kubenswrapper[4880]: E1201 03:23:01.785587 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:23:07 crc kubenswrapper[4880]: I1201 03:23:07.919388 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7467d8cff5-62dbn" podUID="345c72ac-f2df-430d-8a61-9416bdda67a9" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 01 03:23:15 crc kubenswrapper[4880]: I1201 03:23:15.057091 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mgfdp"] Dec 01 03:23:15 crc kubenswrapper[4880]: I1201 03:23:15.071816 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mgfdp"] Dec 01 03:23:16 crc kubenswrapper[4880]: I1201 03:23:16.786586 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:23:16 crc kubenswrapper[4880]: E1201 03:23:16.788372 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:23:16 crc kubenswrapper[4880]: I1201 03:23:16.805504 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2a3bf429-3ea4-43b1-a5c3-34c138ba8e77" path="/var/lib/kubelet/pods/2a3bf429-3ea4-43b1-a5c3-34c138ba8e77/volumes" Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.047891 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-258b-account-create-update-kscpx"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.062574 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-j7kh9"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.072718 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-688c-account-create-update-xvv2g"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.084404 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-5ff8k"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.091730 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-258b-account-create-update-kscpx"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.099079 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-s7glh"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.105898 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-5ff8k"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.112718 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-688c-account-create-update-xvv2g"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.119002 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-s7glh"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.125301 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-j7kh9"] Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.807141 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7554d9ac-da16-4f66-8174-cec776c1cb09" 
path="/var/lib/kubelet/pods/7554d9ac-da16-4f66-8174-cec776c1cb09/volumes" Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.809345 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c09d440-d3ba-4284-be20-bd4853fdbd6a" path="/var/lib/kubelet/pods/8c09d440-d3ba-4284-be20-bd4853fdbd6a/volumes" Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.812992 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a939664f-b676-4f49-9e2f-69dc060cd7aa" path="/var/lib/kubelet/pods/a939664f-b676-4f49-9e2f-69dc060cd7aa/volumes" Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.815515 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1" path="/var/lib/kubelet/pods/aa095e4c-2b8c-4a33-b1a1-27c3c9d7cff1/volumes" Dec 01 03:23:26 crc kubenswrapper[4880]: I1201 03:23:26.817666 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f183f4ac-adb3-4020-80c4-a06486c2976f" path="/var/lib/kubelet/pods/f183f4ac-adb3-4020-80c4-a06486c2976f/volumes" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.037993 4880 scope.go:117] "RemoveContainer" containerID="c64e32e64219008c9f8d250cf58b5f9e40a8b18b4c42fd5ac4b716867172e387" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.073404 4880 scope.go:117] "RemoveContainer" containerID="63c8cc42663434f16eff74be28fa75ae0a8f81f467f21a565445dc37f983329f" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.154620 4880 scope.go:117] "RemoveContainer" containerID="362af1ed8776c66f59098e97377a2e613a54789eb0e6eebaa83f0ee7ccb65306" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.198245 4880 scope.go:117] "RemoveContainer" containerID="1f0a64783561e25499bb51dce0dcddb66ba691e8f7c8460ce8d7520bfd02659e" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.234928 4880 scope.go:117] "RemoveContainer" containerID="9a9d4a3a67a5baa7a24d90ed41625af2b58838850fead4c4e170127d086c498f" Dec 01 03:23:29 crc 
kubenswrapper[4880]: I1201 03:23:29.277639 4880 scope.go:117] "RemoveContainer" containerID="2de4aea1c9158835b392bcd2462f5d5f849e27590ff7f6c5a176f1ea0e19ba0f" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.317607 4880 scope.go:117] "RemoveContainer" containerID="6204f7b967c846e7a9a684207a57b4825b0ed6db6517ef47744618ea8b0c9fd7" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.346238 4880 scope.go:117] "RemoveContainer" containerID="68214ba28a236cfcda1b4faa9f55a39a79dfa543b3fc94dad24e802ab74d8ed6" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.381279 4880 scope.go:117] "RemoveContainer" containerID="a55ded1176d47ac89be030b8ce31e8fbc696b484cef607baa1cdd27860f7a00a" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.409414 4880 scope.go:117] "RemoveContainer" containerID="d410b2404658c69bf4a4a3ec97fa51807261e5bc600a6be148d1b4c88e93e6ca" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.446719 4880 scope.go:117] "RemoveContainer" containerID="d2534b9c8eca3c8f76ea66f02e8308f9b5838c333f2f1d8b570eda2dad698e74" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.479609 4880 scope.go:117] "RemoveContainer" containerID="309254ecd3c9a4cf48a3b65a92ba0706917850f2fdf3c31bec1c3538b99ed176" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.507379 4880 scope.go:117] "RemoveContainer" containerID="bedff099db1d04e247cfea937e8fd7e5185ef5babd7a9aa274938f11f68a3d8d" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.536079 4880 scope.go:117] "RemoveContainer" containerID="6f72c1a36c8c0208c0c105043a1c5c07a6cd0a3d190728e948586fb34efca663" Dec 01 03:23:29 crc kubenswrapper[4880]: I1201 03:23:29.783492 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:23:29 crc kubenswrapper[4880]: E1201 03:23:29.783754 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.037453 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d2e-account-create-update-9m5g5"] Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.052226 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-458c-account-create-update-7884x"] Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.067930 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-458c-account-create-update-7884x"] Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.078353 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-d7qqc"] Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.087647 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d2e-account-create-update-9m5g5"] Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.097541 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-d7qqc"] Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.831570 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2943e7b1-9d51-4f2e-b02f-f6725dd63c74" path="/var/lib/kubelet/pods/2943e7b1-9d51-4f2e-b02f-f6725dd63c74/volumes" Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.833852 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60589a93-d998-4636-9220-053ff2f8384c" path="/var/lib/kubelet/pods/60589a93-d998-4636-9220-053ff2f8384c/volumes" Dec 01 03:23:30 crc kubenswrapper[4880]: I1201 03:23:30.835182 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55f1968-78c3-4b8d-9fa9-2b2807665167" 
path="/var/lib/kubelet/pods/d55f1968-78c3-4b8d-9fa9-2b2807665167/volumes" Dec 01 03:23:34 crc kubenswrapper[4880]: I1201 03:23:34.041785 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ztwtt"] Dec 01 03:23:34 crc kubenswrapper[4880]: I1201 03:23:34.051469 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ztwtt"] Dec 01 03:23:34 crc kubenswrapper[4880]: I1201 03:23:34.802208 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a60673-7e16-4057-8c8b-1c0b81de2a32" path="/var/lib/kubelet/pods/69a60673-7e16-4057-8c8b-1c0b81de2a32/volumes" Dec 01 03:23:42 crc kubenswrapper[4880]: I1201 03:23:42.784167 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:23:42 crc kubenswrapper[4880]: E1201 03:23:42.785371 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:23:53 crc kubenswrapper[4880]: I1201 03:23:53.785661 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:23:53 crc kubenswrapper[4880]: E1201 03:23:53.788118 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:24:07 
crc kubenswrapper[4880]: I1201 03:24:07.784457 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:24:07 crc kubenswrapper[4880]: E1201 03:24:07.786198 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:24:15 crc kubenswrapper[4880]: I1201 03:24:15.065243 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8llx4"] Dec 01 03:24:15 crc kubenswrapper[4880]: I1201 03:24:15.081039 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8llx4"] Dec 01 03:24:16 crc kubenswrapper[4880]: I1201 03:24:16.803118 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8042faf-fbbd-4bc0-9f82-6d077bb32a5d" path="/var/lib/kubelet/pods/e8042faf-fbbd-4bc0-9f82-6d077bb32a5d/volumes" Dec 01 03:24:19 crc kubenswrapper[4880]: I1201 03:24:19.784064 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:24:19 crc kubenswrapper[4880]: E1201 03:24:19.784834 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:24:20 crc kubenswrapper[4880]: I1201 03:24:20.335968 4880 generic.go:334] "Generic (PLEG): 
container finished" podID="a89d6e65-6370-4958-9f09-32f27bbcd834" containerID="fe49a76932645e3cef8eeb121db7f775f58bcb76961ecb4f2ed00bef23ec2c62" exitCode=0 Dec 01 03:24:20 crc kubenswrapper[4880]: I1201 03:24:20.336032 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" event={"ID":"a89d6e65-6370-4958-9f09-32f27bbcd834","Type":"ContainerDied","Data":"fe49a76932645e3cef8eeb121db7f775f58bcb76961ecb4f2ed00bef23ec2c62"} Dec 01 03:24:21 crc kubenswrapper[4880]: I1201 03:24:21.798606 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:24:21 crc kubenswrapper[4880]: I1201 03:24:21.979750 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9lqg\" (UniqueName: \"kubernetes.io/projected/a89d6e65-6370-4958-9f09-32f27bbcd834-kube-api-access-f9lqg\") pod \"a89d6e65-6370-4958-9f09-32f27bbcd834\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " Dec 01 03:24:21 crc kubenswrapper[4880]: I1201 03:24:21.979796 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-inventory\") pod \"a89d6e65-6370-4958-9f09-32f27bbcd834\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " Dec 01 03:24:21 crc kubenswrapper[4880]: I1201 03:24:21.979968 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-ssh-key\") pod \"a89d6e65-6370-4958-9f09-32f27bbcd834\" (UID: \"a89d6e65-6370-4958-9f09-32f27bbcd834\") " Dec 01 03:24:21 crc kubenswrapper[4880]: I1201 03:24:21.987742 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89d6e65-6370-4958-9f09-32f27bbcd834-kube-api-access-f9lqg" 
(OuterVolumeSpecName: "kube-api-access-f9lqg") pod "a89d6e65-6370-4958-9f09-32f27bbcd834" (UID: "a89d6e65-6370-4958-9f09-32f27bbcd834"). InnerVolumeSpecName "kube-api-access-f9lqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.024673 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-inventory" (OuterVolumeSpecName: "inventory") pod "a89d6e65-6370-4958-9f09-32f27bbcd834" (UID: "a89d6e65-6370-4958-9f09-32f27bbcd834"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.033482 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a89d6e65-6370-4958-9f09-32f27bbcd834" (UID: "a89d6e65-6370-4958-9f09-32f27bbcd834"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.082102 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9lqg\" (UniqueName: \"kubernetes.io/projected/a89d6e65-6370-4958-9f09-32f27bbcd834-kube-api-access-f9lqg\") on node \"crc\" DevicePath \"\"" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.082150 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.082167 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a89d6e65-6370-4958-9f09-32f27bbcd834-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.360982 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" event={"ID":"a89d6e65-6370-4958-9f09-32f27bbcd834","Type":"ContainerDied","Data":"0960e5d372abbd1cefa4398462be492322dbcfb8e934bc933f4c710815e1f538"} Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.361027 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0960e5d372abbd1cefa4398462be492322dbcfb8e934bc933f4c710815e1f538" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.361349 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2vp4s" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.539060 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59"] Dec 01 03:24:22 crc kubenswrapper[4880]: E1201 03:24:22.539512 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89d6e65-6370-4958-9f09-32f27bbcd834" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.539534 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89d6e65-6370-4958-9f09-32f27bbcd834" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.539779 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a89d6e65-6370-4958-9f09-32f27bbcd834" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.541266 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.543834 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.543982 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.543842 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.543927 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.556433 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59"] Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.693054 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrtg\" (UniqueName: \"kubernetes.io/projected/07c73146-2795-48e8-af04-e542d7c8047a-kube-api-access-fkrtg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.693321 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: 
I1201 03:24:22.693376 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.796291 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.796336 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.796427 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrtg\" (UniqueName: \"kubernetes.io/projected/07c73146-2795-48e8-af04-e542d7c8047a-kube-api-access-fkrtg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.803712 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.803713 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.821590 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrtg\" (UniqueName: \"kubernetes.io/projected/07c73146-2795-48e8-af04-e542d7c8047a-kube-api-access-fkrtg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v8m59\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:22 crc kubenswrapper[4880]: I1201 03:24:22.859057 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:24:23 crc kubenswrapper[4880]: I1201 03:24:23.454078 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59"] Dec 01 03:24:24 crc kubenswrapper[4880]: I1201 03:24:24.387235 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" event={"ID":"07c73146-2795-48e8-af04-e542d7c8047a","Type":"ContainerStarted","Data":"a654dd007dd3b63f65dc35b11e761e49d3d3fafcb6abc4e436a45c4699fcce67"} Dec 01 03:24:24 crc kubenswrapper[4880]: I1201 03:24:24.387562 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" event={"ID":"07c73146-2795-48e8-af04-e542d7c8047a","Type":"ContainerStarted","Data":"948337c1c1682a4adc7131c9105b5340241db718481209959da1e0db95d1e657"} Dec 01 03:24:24 crc kubenswrapper[4880]: I1201 03:24:24.412311 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" podStartSLOduration=1.952092567 podStartE2EDuration="2.412288076s" podCreationTimestamp="2025-12-01 03:24:22 +0000 UTC" firstStartedPulling="2025-12-01 03:24:23.474242521 +0000 UTC m=+1692.985496913" lastFinishedPulling="2025-12-01 03:24:23.93443804 +0000 UTC m=+1693.445692422" observedRunningTime="2025-12-01 03:24:24.4043524 +0000 UTC m=+1693.915606802" watchObservedRunningTime="2025-12-01 03:24:24.412288076 +0000 UTC m=+1693.923542448" Dec 01 03:24:29 crc kubenswrapper[4880]: I1201 03:24:29.816209 4880 scope.go:117] "RemoveContainer" containerID="fb379bdd3a442c305680db2578fa26329d41018e30eb1a205f849f08efa76745" Dec 01 03:24:29 crc kubenswrapper[4880]: I1201 03:24:29.871419 4880 scope.go:117] "RemoveContainer" containerID="aad484185ff39742febe9c063a22074e4ccb7375c48ffe360be7dd584867dbaa" 
Dec 01 03:24:29 crc kubenswrapper[4880]: I1201 03:24:29.920982 4880 scope.go:117] "RemoveContainer" containerID="b7a897e083947741245c6b8affa4e798206c5d4101e7dfae27863946a196d592" Dec 01 03:24:29 crc kubenswrapper[4880]: I1201 03:24:29.968207 4880 scope.go:117] "RemoveContainer" containerID="0c0fb2cac9fc5c19b21b0512ca3ba60aaae41ca8e5c2f0942eecbfdfe896212e" Dec 01 03:24:30 crc kubenswrapper[4880]: I1201 03:24:30.018038 4880 scope.go:117] "RemoveContainer" containerID="5393a7f82e41055e897f6d4d3005a21b45b947393927468ecef374f98ee37a40" Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.058432 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gxfjq"] Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.069546 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gxfjq"] Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.077862 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zncgf"] Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.085414 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9vgls"] Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.093435 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9vgls"] Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.101052 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zncgf"] Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.784645 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:24:32 crc kubenswrapper[4880]: E1201 03:24:32.785152 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.795019 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15fa76f-467b-485e-96b8-5fdec71318f5" path="/var/lib/kubelet/pods/a15fa76f-467b-485e-96b8-5fdec71318f5/volumes" Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.796142 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3288e77-4e64-48d4-995e-93abe07bf1bd" path="/var/lib/kubelet/pods/d3288e77-4e64-48d4-995e-93abe07bf1bd/volumes" Dec 01 03:24:32 crc kubenswrapper[4880]: I1201 03:24:32.796961 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb21b71d-303a-4e92-9086-789ded0f11fa" path="/var/lib/kubelet/pods/fb21b71d-303a-4e92-9086-789ded0f11fa/volumes" Dec 01 03:24:44 crc kubenswrapper[4880]: I1201 03:24:44.784426 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:24:44 crc kubenswrapper[4880]: E1201 03:24:44.813400 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:24:46 crc kubenswrapper[4880]: I1201 03:24:46.044021 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-mgzm4"] Dec 01 03:24:46 crc kubenswrapper[4880]: I1201 03:24:46.055589 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-mgzm4"] Dec 01 03:24:46 crc kubenswrapper[4880]: I1201 
03:24:46.795974 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ee6695-1440-4087-b17a-0af2371eceed" path="/var/lib/kubelet/pods/81ee6695-1440-4087-b17a-0af2371eceed/volumes" Dec 01 03:24:47 crc kubenswrapper[4880]: I1201 03:24:47.034345 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lczzw"] Dec 01 03:24:47 crc kubenswrapper[4880]: I1201 03:24:47.042554 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lczzw"] Dec 01 03:24:48 crc kubenswrapper[4880]: I1201 03:24:48.811751 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe59d4ff-1b09-4404-a45d-4b2b73e3ac31" path="/var/lib/kubelet/pods/fe59d4ff-1b09-4404-a45d-4b2b73e3ac31/volumes" Dec 01 03:24:58 crc kubenswrapper[4880]: I1201 03:24:58.784445 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:24:58 crc kubenswrapper[4880]: E1201 03:24:58.785679 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:25:13 crc kubenswrapper[4880]: I1201 03:25:13.784284 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:25:13 crc kubenswrapper[4880]: E1201 03:25:13.785018 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:25:24 crc kubenswrapper[4880]: I1201 03:25:24.784456 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:25:24 crc kubenswrapper[4880]: E1201 03:25:24.785552 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:25:30 crc kubenswrapper[4880]: I1201 03:25:30.155214 4880 scope.go:117] "RemoveContainer" containerID="217162db11bdf228ddd8041b41bf9ff03ce0a43a77ef72edcbc5ede8c120fbf5" Dec 01 03:25:30 crc kubenswrapper[4880]: I1201 03:25:30.186802 4880 scope.go:117] "RemoveContainer" containerID="eac99c63e989f554cebf3d5f8c8b31b2bddeb7c6171daeb42033b08fe99911e8" Dec 01 03:25:30 crc kubenswrapper[4880]: I1201 03:25:30.255350 4880 scope.go:117] "RemoveContainer" containerID="303e097529623c44ececcbd246b782561ac6ab40f4a2853928bd50f834c952f0" Dec 01 03:25:30 crc kubenswrapper[4880]: I1201 03:25:30.303826 4880 scope.go:117] "RemoveContainer" containerID="5476647e93f626d6147fb6155e31427e9ea0f6df268694de98f70d8b9f9f1c1c" Dec 01 03:25:30 crc kubenswrapper[4880]: I1201 03:25:30.363212 4880 scope.go:117] "RemoveContainer" containerID="58a63e7f15c7a4d96e51e54f31ac1e76f800c5e5eeed04e56caa5760732a9e99" Dec 01 03:25:37 crc kubenswrapper[4880]: I1201 03:25:37.784290 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:25:37 crc kubenswrapper[4880]: E1201 03:25:37.787107 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.079218 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xxfdq"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.089746 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6403-account-create-update-89c7r"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.104100 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dpchb"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.116081 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6403-account-create-update-89c7r"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.123101 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sfwjq"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.129143 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dpchb"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.134942 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xxfdq"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.152648 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sfwjq"] Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.299814 4880 generic.go:334] "Generic (PLEG): container finished" podID="07c73146-2795-48e8-af04-e542d7c8047a" containerID="a654dd007dd3b63f65dc35b11e761e49d3d3fafcb6abc4e436a45c4699fcce67" exitCode=0 Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.299869 
4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" event={"ID":"07c73146-2795-48e8-af04-e542d7c8047a","Type":"ContainerDied","Data":"a654dd007dd3b63f65dc35b11e761e49d3d3fafcb6abc4e436a45c4699fcce67"} Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.799491 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bef3f01-6342-40a7-9213-9358a20b7efe" path="/var/lib/kubelet/pods/0bef3f01-6342-40a7-9213-9358a20b7efe/volumes" Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.801836 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa52299-6b4d-47d0-8250-4707b96770f9" path="/var/lib/kubelet/pods/4fa52299-6b4d-47d0-8250-4707b96770f9/volumes" Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.803150 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942e9ee3-6a18-426c-ad47-fe3ba8ae4213" path="/var/lib/kubelet/pods/942e9ee3-6a18-426c-ad47-fe3ba8ae4213/volumes" Dec 01 03:25:44 crc kubenswrapper[4880]: I1201 03:25:44.805259 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2489b51-6f7f-4f48-b614-870ab86df12a" path="/var/lib/kubelet/pods/a2489b51-6f7f-4f48-b614-870ab86df12a/volumes" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.045713 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2946-account-create-update-dq78p"] Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.053958 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e8e4-account-create-update-2vhq8"] Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.074410 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2946-account-create-update-dq78p"] Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.083129 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e8e4-account-create-update-2vhq8"] Dec 
01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.756051 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.850200 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-inventory\") pod \"07c73146-2795-48e8-af04-e542d7c8047a\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.850246 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrtg\" (UniqueName: \"kubernetes.io/projected/07c73146-2795-48e8-af04-e542d7c8047a-kube-api-access-fkrtg\") pod \"07c73146-2795-48e8-af04-e542d7c8047a\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.850358 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-ssh-key\") pod \"07c73146-2795-48e8-af04-e542d7c8047a\" (UID: \"07c73146-2795-48e8-af04-e542d7c8047a\") " Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.855669 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c73146-2795-48e8-af04-e542d7c8047a-kube-api-access-fkrtg" (OuterVolumeSpecName: "kube-api-access-fkrtg") pod "07c73146-2795-48e8-af04-e542d7c8047a" (UID: "07c73146-2795-48e8-af04-e542d7c8047a"). InnerVolumeSpecName "kube-api-access-fkrtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.879976 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "07c73146-2795-48e8-af04-e542d7c8047a" (UID: "07c73146-2795-48e8-af04-e542d7c8047a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.894196 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-inventory" (OuterVolumeSpecName: "inventory") pod "07c73146-2795-48e8-af04-e542d7c8047a" (UID: "07c73146-2795-48e8-af04-e542d7c8047a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.951856 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.951901 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c73146-2795-48e8-af04-e542d7c8047a-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:25:45 crc kubenswrapper[4880]: I1201 03:25:45.951914 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkrtg\" (UniqueName: \"kubernetes.io/projected/07c73146-2795-48e8-af04-e542d7c8047a-kube-api-access-fkrtg\") on node \"crc\" DevicePath \"\"" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.352264 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" 
event={"ID":"07c73146-2795-48e8-af04-e542d7c8047a","Type":"ContainerDied","Data":"948337c1c1682a4adc7131c9105b5340241db718481209959da1e0db95d1e657"} Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.352315 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948337c1c1682a4adc7131c9105b5340241db718481209959da1e0db95d1e657" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.352324 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v8m59" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.435197 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw"] Dec 01 03:25:46 crc kubenswrapper[4880]: E1201 03:25:46.435544 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c73146-2795-48e8-af04-e542d7c8047a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.435561 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c73146-2795-48e8-af04-e542d7c8047a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.435751 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c73146-2795-48e8-af04-e542d7c8047a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.436311 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.438670 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.439689 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.440006 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.455963 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.470063 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.470191 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.470259 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsjh\" (UniqueName: 
\"kubernetes.io/projected/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-kube-api-access-rvsjh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.490178 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw"] Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.572269 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.572358 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsjh\" (UniqueName: \"kubernetes.io/projected/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-kube-api-access-rvsjh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.573011 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.579249 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.580352 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.587037 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsjh\" (UniqueName: \"kubernetes.io/projected/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-kube-api-access-rvsjh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-295xw\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.768047 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.807953 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c5dd71-4081-4b8c-bb4c-c7e0087c7670" path="/var/lib/kubelet/pods/38c5dd71-4081-4b8c-bb4c-c7e0087c7670/volumes" Dec 01 03:25:46 crc kubenswrapper[4880]: I1201 03:25:46.809836 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed214c1-1d55-4063-929f-2c4b1d88f025" path="/var/lib/kubelet/pods/aed214c1-1d55-4063-929f-2c4b1d88f025/volumes" Dec 01 03:25:47 crc kubenswrapper[4880]: I1201 03:25:47.361500 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw"] Dec 01 03:25:48 crc kubenswrapper[4880]: I1201 03:25:48.372539 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" event={"ID":"a3c2b6e1-3b37-443c-90f7-f4e4060e1571","Type":"ContainerStarted","Data":"6766d8f76ac9be0de1036b32d3578815911559503fc6e87d48feaa35fd6d64bb"} Dec 01 03:25:48 crc kubenswrapper[4880]: I1201 03:25:48.372836 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" event={"ID":"a3c2b6e1-3b37-443c-90f7-f4e4060e1571","Type":"ContainerStarted","Data":"731f3acdb6ef54be29d2367868f5325f2ddef6402f57649a1485e523859865a3"} Dec 01 03:25:48 crc kubenswrapper[4880]: I1201 03:25:48.400967 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" podStartSLOduration=1.612697656 podStartE2EDuration="2.40095008s" podCreationTimestamp="2025-12-01 03:25:46 +0000 UTC" firstStartedPulling="2025-12-01 03:25:47.363544121 +0000 UTC m=+1776.874798493" lastFinishedPulling="2025-12-01 03:25:48.151796545 +0000 UTC m=+1777.663050917" 
observedRunningTime="2025-12-01 03:25:48.394529912 +0000 UTC m=+1777.905784284" watchObservedRunningTime="2025-12-01 03:25:48.40095008 +0000 UTC m=+1777.912204452" Dec 01 03:25:52 crc kubenswrapper[4880]: I1201 03:25:52.784293 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:25:52 crc kubenswrapper[4880]: E1201 03:25:52.784956 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:25:54 crc kubenswrapper[4880]: I1201 03:25:54.462964 4880 generic.go:334] "Generic (PLEG): container finished" podID="a3c2b6e1-3b37-443c-90f7-f4e4060e1571" containerID="6766d8f76ac9be0de1036b32d3578815911559503fc6e87d48feaa35fd6d64bb" exitCode=0 Dec 01 03:25:54 crc kubenswrapper[4880]: I1201 03:25:54.463083 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" event={"ID":"a3c2b6e1-3b37-443c-90f7-f4e4060e1571","Type":"ContainerDied","Data":"6766d8f76ac9be0de1036b32d3578815911559503fc6e87d48feaa35fd6d64bb"} Dec 01 03:25:55 crc kubenswrapper[4880]: I1201 03:25:55.904098 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:55 crc kubenswrapper[4880]: I1201 03:25:55.968528 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-inventory\") pod \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " Dec 01 03:25:55 crc kubenswrapper[4880]: I1201 03:25:55.968788 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvsjh\" (UniqueName: \"kubernetes.io/projected/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-kube-api-access-rvsjh\") pod \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " Dec 01 03:25:55 crc kubenswrapper[4880]: I1201 03:25:55.968910 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-ssh-key\") pod \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\" (UID: \"a3c2b6e1-3b37-443c-90f7-f4e4060e1571\") " Dec 01 03:25:55 crc kubenswrapper[4880]: I1201 03:25:55.981060 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-kube-api-access-rvsjh" (OuterVolumeSpecName: "kube-api-access-rvsjh") pod "a3c2b6e1-3b37-443c-90f7-f4e4060e1571" (UID: "a3c2b6e1-3b37-443c-90f7-f4e4060e1571"). InnerVolumeSpecName "kube-api-access-rvsjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.004510 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3c2b6e1-3b37-443c-90f7-f4e4060e1571" (UID: "a3c2b6e1-3b37-443c-90f7-f4e4060e1571"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.005410 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-inventory" (OuterVolumeSpecName: "inventory") pod "a3c2b6e1-3b37-443c-90f7-f4e4060e1571" (UID: "a3c2b6e1-3b37-443c-90f7-f4e4060e1571"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.071073 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvsjh\" (UniqueName: \"kubernetes.io/projected/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-kube-api-access-rvsjh\") on node \"crc\" DevicePath \"\"" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.071106 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.071118 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3c2b6e1-3b37-443c-90f7-f4e4060e1571-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.485263 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" event={"ID":"a3c2b6e1-3b37-443c-90f7-f4e4060e1571","Type":"ContainerDied","Data":"731f3acdb6ef54be29d2367868f5325f2ddef6402f57649a1485e523859865a3"} Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.485334 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="731f3acdb6ef54be29d2367868f5325f2ddef6402f57649a1485e523859865a3" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.485428 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-295xw" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.578147 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj"] Dec 01 03:25:56 crc kubenswrapper[4880]: E1201 03:25:56.578621 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2b6e1-3b37-443c-90f7-f4e4060e1571" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.578643 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2b6e1-3b37-443c-90f7-f4e4060e1571" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.578893 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c2b6e1-3b37-443c-90f7-f4e4060e1571" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.579618 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.581627 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.602238 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.602449 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.602627 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.616049 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj"] Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.685278 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.685346 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gx8\" (UniqueName: \"kubernetes.io/projected/d95faf69-eeea-42ae-a532-611cb648fd98-kube-api-access-m4gx8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.685421 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.787169 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.787340 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.788011 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gx8\" (UniqueName: \"kubernetes.io/projected/d95faf69-eeea-42ae-a532-611cb648fd98-kube-api-access-m4gx8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.794241 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: 
\"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.798065 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.805007 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gx8\" (UniqueName: \"kubernetes.io/projected/d95faf69-eeea-42ae-a532-611cb648fd98-kube-api-access-m4gx8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9f2kj\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:56 crc kubenswrapper[4880]: I1201 03:25:56.902616 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:25:57 crc kubenswrapper[4880]: I1201 03:25:57.531201 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj"] Dec 01 03:25:57 crc kubenswrapper[4880]: W1201 03:25:57.537768 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd95faf69_eeea_42ae_a532_611cb648fd98.slice/crio-52c1275889813d19dc8507cc10142857be9cad1863838d388172e3d2937fa923 WatchSource:0}: Error finding container 52c1275889813d19dc8507cc10142857be9cad1863838d388172e3d2937fa923: Status 404 returned error can't find the container with id 52c1275889813d19dc8507cc10142857be9cad1863838d388172e3d2937fa923 Dec 01 03:25:58 crc kubenswrapper[4880]: I1201 03:25:58.504758 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" event={"ID":"d95faf69-eeea-42ae-a532-611cb648fd98","Type":"ContainerStarted","Data":"afe7a0e6a929178650cd890befc1010c2402ac15764dae52b30f80ea185e2c16"} Dec 01 03:25:58 crc kubenswrapper[4880]: I1201 03:25:58.505220 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" event={"ID":"d95faf69-eeea-42ae-a532-611cb648fd98","Type":"ContainerStarted","Data":"52c1275889813d19dc8507cc10142857be9cad1863838d388172e3d2937fa923"} Dec 01 03:25:58 crc kubenswrapper[4880]: I1201 03:25:58.529812 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" podStartSLOduration=2.038543812 podStartE2EDuration="2.529796189s" podCreationTimestamp="2025-12-01 03:25:56 +0000 UTC" firstStartedPulling="2025-12-01 03:25:57.54059436 +0000 UTC m=+1787.051848762" lastFinishedPulling="2025-12-01 03:25:58.031846727 +0000 UTC m=+1787.543101139" 
observedRunningTime="2025-12-01 03:25:58.521278439 +0000 UTC m=+1788.032532811" watchObservedRunningTime="2025-12-01 03:25:58.529796189 +0000 UTC m=+1788.041050561" Dec 01 03:26:06 crc kubenswrapper[4880]: I1201 03:26:06.784915 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:26:06 crc kubenswrapper[4880]: E1201 03:26:06.785805 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:26:14 crc kubenswrapper[4880]: I1201 03:26:14.049685 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nj9mn"] Dec 01 03:26:14 crc kubenswrapper[4880]: I1201 03:26:14.057173 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nj9mn"] Dec 01 03:26:14 crc kubenswrapper[4880]: I1201 03:26:14.795245 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e7d851-5607-477c-a499-bee4568e24c2" path="/var/lib/kubelet/pods/b2e7d851-5607-477c-a499-bee4568e24c2/volumes" Dec 01 03:26:17 crc kubenswrapper[4880]: I1201 03:26:17.784384 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:26:17 crc kubenswrapper[4880]: E1201 03:26:17.785804 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.552983 4880 scope.go:117] "RemoveContainer" containerID="d793974c95a6850e01bbee10ff053831fd566ede014f05f8924bcf61b97e6a21" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.589312 4880 scope.go:117] "RemoveContainer" containerID="a6232350a32f7a10e748739f1dd8650e405671775321d90c82d8153da4212cb4" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.650578 4880 scope.go:117] "RemoveContainer" containerID="baa7c517890d2c84316a0a440ac34017d06df3834f1072adb37fbd9055bddeec" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.679794 4880 scope.go:117] "RemoveContainer" containerID="7e15f927cfcac2b60574fb7db3763113f41cd33991e0e8229ad99cf186a04662" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.736424 4880 scope.go:117] "RemoveContainer" containerID="09b809bfc0a7a2581058550cf215e4a962ab6fd51fd44ce8fabc99e946daaab3" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.780175 4880 scope.go:117] "RemoveContainer" containerID="1da8d8ab16010feb0c8d380da920b5076cba417a620ba24b4707433ebf27bcc5" Dec 01 03:26:30 crc kubenswrapper[4880]: I1201 03:26:30.839661 4880 scope.go:117] "RemoveContainer" containerID="4dc1f945c882a4d7c06c75fa753538910c0ddfaf0e27da2fa29be03cd5691e2b" Dec 01 03:26:31 crc kubenswrapper[4880]: I1201 03:26:31.784894 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:26:31 crc kubenswrapper[4880]: E1201 03:26:31.785443 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:26:39 crc kubenswrapper[4880]: I1201 03:26:39.073584 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-sn8nz"] Dec 01 03:26:39 crc kubenswrapper[4880]: I1201 03:26:39.086091 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-sn8nz"] Dec 01 03:26:40 crc kubenswrapper[4880]: I1201 03:26:40.802472 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0910838-ee7d-4d85-973d-4d34d331e684" path="/var/lib/kubelet/pods/f0910838-ee7d-4d85-973d-4d34d331e684/volumes" Dec 01 03:26:42 crc kubenswrapper[4880]: I1201 03:26:42.031695 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-szfn6"] Dec 01 03:26:42 crc kubenswrapper[4880]: I1201 03:26:42.042708 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-szfn6"] Dec 01 03:26:42 crc kubenswrapper[4880]: I1201 03:26:42.798101 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96025ed-9ab7-4d57-be27-b2cc9b1f5d72" path="/var/lib/kubelet/pods/a96025ed-9ab7-4d57-be27-b2cc9b1f5d72/volumes" Dec 01 03:26:43 crc kubenswrapper[4880]: I1201 03:26:43.977334 4880 generic.go:334] "Generic (PLEG): container finished" podID="d95faf69-eeea-42ae-a532-611cb648fd98" containerID="afe7a0e6a929178650cd890befc1010c2402ac15764dae52b30f80ea185e2c16" exitCode=0 Dec 01 03:26:43 crc kubenswrapper[4880]: I1201 03:26:43.977405 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" event={"ID":"d95faf69-eeea-42ae-a532-611cb648fd98","Type":"ContainerDied","Data":"afe7a0e6a929178650cd890befc1010c2402ac15764dae52b30f80ea185e2c16"} Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.487076 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.600705 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4gx8\" (UniqueName: \"kubernetes.io/projected/d95faf69-eeea-42ae-a532-611cb648fd98-kube-api-access-m4gx8\") pod \"d95faf69-eeea-42ae-a532-611cb648fd98\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.601634 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-ssh-key\") pod \"d95faf69-eeea-42ae-a532-611cb648fd98\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.601985 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-inventory\") pod \"d95faf69-eeea-42ae-a532-611cb648fd98\" (UID: \"d95faf69-eeea-42ae-a532-611cb648fd98\") " Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.610103 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95faf69-eeea-42ae-a532-611cb648fd98-kube-api-access-m4gx8" (OuterVolumeSpecName: "kube-api-access-m4gx8") pod "d95faf69-eeea-42ae-a532-611cb648fd98" (UID: "d95faf69-eeea-42ae-a532-611cb648fd98"). InnerVolumeSpecName "kube-api-access-m4gx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.638681 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-inventory" (OuterVolumeSpecName: "inventory") pod "d95faf69-eeea-42ae-a532-611cb648fd98" (UID: "d95faf69-eeea-42ae-a532-611cb648fd98"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.671022 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d95faf69-eeea-42ae-a532-611cb648fd98" (UID: "d95faf69-eeea-42ae-a532-611cb648fd98"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.703755 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.703790 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4gx8\" (UniqueName: \"kubernetes.io/projected/d95faf69-eeea-42ae-a532-611cb648fd98-kube-api-access-m4gx8\") on node \"crc\" DevicePath \"\"" Dec 01 03:26:45 crc kubenswrapper[4880]: I1201 03:26:45.703802 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d95faf69-eeea-42ae-a532-611cb648fd98-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.004614 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" event={"ID":"d95faf69-eeea-42ae-a532-611cb648fd98","Type":"ContainerDied","Data":"52c1275889813d19dc8507cc10142857be9cad1863838d388172e3d2937fa923"} Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.004677 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c1275889813d19dc8507cc10142857be9cad1863838d388172e3d2937fa923" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.004710 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9f2kj" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.139205 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8"] Dec 01 03:26:46 crc kubenswrapper[4880]: E1201 03:26:46.139658 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95faf69-eeea-42ae-a532-611cb648fd98" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.139683 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95faf69-eeea-42ae-a532-611cb648fd98" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.139924 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95faf69-eeea-42ae-a532-611cb648fd98" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.140618 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.143719 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.143756 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.145326 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.145332 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.171784 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8"] Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.317653 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.317814 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gsh2\" (UniqueName: \"kubernetes.io/projected/604c61e1-dd09-42b7-b1c0-498876232002-kube-api-access-8gsh2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.317929 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.419707 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.419840 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gsh2\" (UniqueName: \"kubernetes.io/projected/604c61e1-dd09-42b7-b1c0-498876232002-kube-api-access-8gsh2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.419968 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.425387 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: 
\"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.428054 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.455615 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gsh2\" (UniqueName: \"kubernetes.io/projected/604c61e1-dd09-42b7-b1c0-498876232002-kube-api-access-8gsh2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.462336 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:26:46 crc kubenswrapper[4880]: I1201 03:26:46.784675 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:26:46 crc kubenswrapper[4880]: E1201 03:26:46.785171 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:26:47 crc kubenswrapper[4880]: I1201 03:26:47.088909 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8"] Dec 01 03:26:48 crc kubenswrapper[4880]: I1201 03:26:48.023753 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" event={"ID":"604c61e1-dd09-42b7-b1c0-498876232002","Type":"ContainerStarted","Data":"8052086f42633064a18d0728844584381d0af3072daefa2422394afe9cbcb1b5"} Dec 01 03:26:48 crc kubenswrapper[4880]: I1201 03:26:48.024159 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" event={"ID":"604c61e1-dd09-42b7-b1c0-498876232002","Type":"ContainerStarted","Data":"52dde519fbd4f0e72e90ebb90bb4f5d3794e0717c7183e54a46b74199ec03293"} Dec 01 03:26:48 crc kubenswrapper[4880]: I1201 03:26:48.046246 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" podStartSLOduration=1.419172351 podStartE2EDuration="2.046227513s" podCreationTimestamp="2025-12-01 03:26:46 +0000 UTC" firstStartedPulling="2025-12-01 
03:26:47.096608322 +0000 UTC m=+1836.607862704" lastFinishedPulling="2025-12-01 03:26:47.723663494 +0000 UTC m=+1837.234917866" observedRunningTime="2025-12-01 03:26:48.046064129 +0000 UTC m=+1837.557318501" watchObservedRunningTime="2025-12-01 03:26:48.046227513 +0000 UTC m=+1837.557481895" Dec 01 03:26:59 crc kubenswrapper[4880]: I1201 03:26:59.784465 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:27:00 crc kubenswrapper[4880]: I1201 03:27:00.141709 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"de0a73b1010be3c3f56109b45fc94aace1b7ec1e62e0fed3d697920f77540b99"} Dec 01 03:27:22 crc kubenswrapper[4880]: I1201 03:27:22.046368 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sh8fk"] Dec 01 03:27:22 crc kubenswrapper[4880]: I1201 03:27:22.061267 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sh8fk"] Dec 01 03:27:22 crc kubenswrapper[4880]: I1201 03:27:22.800375 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b861996e-b33f-4277-8c71-60d20d61bad8" path="/var/lib/kubelet/pods/b861996e-b33f-4277-8c71-60d20d61bad8/volumes" Dec 01 03:27:30 crc kubenswrapper[4880]: I1201 03:27:30.973534 4880 scope.go:117] "RemoveContainer" containerID="b5ec7f12809be1c1adc7ec411288618cbebd970e10e05473f10057002a763eb1" Dec 01 03:27:31 crc kubenswrapper[4880]: I1201 03:27:31.017075 4880 scope.go:117] "RemoveContainer" containerID="8a80a684c3d3e4728f5a1f048e4520492094d4d1e4c81c677a984e56af3960fe" Dec 01 03:27:31 crc kubenswrapper[4880]: I1201 03:27:31.083918 4880 scope.go:117] "RemoveContainer" containerID="3af157b9be6e70652812f6f608cdd46d2fdf00287bbca11c1d4f7e89d54cd8c3" Dec 01 03:27:48 crc kubenswrapper[4880]: I1201 03:27:48.688719 4880 
generic.go:334] "Generic (PLEG): container finished" podID="604c61e1-dd09-42b7-b1c0-498876232002" containerID="8052086f42633064a18d0728844584381d0af3072daefa2422394afe9cbcb1b5" exitCode=0 Dec 01 03:27:48 crc kubenswrapper[4880]: I1201 03:27:48.689683 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" event={"ID":"604c61e1-dd09-42b7-b1c0-498876232002","Type":"ContainerDied","Data":"8052086f42633064a18d0728844584381d0af3072daefa2422394afe9cbcb1b5"} Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.152170 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.317476 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-inventory\") pod \"604c61e1-dd09-42b7-b1c0-498876232002\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.317596 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-ssh-key\") pod \"604c61e1-dd09-42b7-b1c0-498876232002\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.317645 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gsh2\" (UniqueName: \"kubernetes.io/projected/604c61e1-dd09-42b7-b1c0-498876232002-kube-api-access-8gsh2\") pod \"604c61e1-dd09-42b7-b1c0-498876232002\" (UID: \"604c61e1-dd09-42b7-b1c0-498876232002\") " Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.326187 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/604c61e1-dd09-42b7-b1c0-498876232002-kube-api-access-8gsh2" (OuterVolumeSpecName: "kube-api-access-8gsh2") pod "604c61e1-dd09-42b7-b1c0-498876232002" (UID: "604c61e1-dd09-42b7-b1c0-498876232002"). InnerVolumeSpecName "kube-api-access-8gsh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.370153 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-inventory" (OuterVolumeSpecName: "inventory") pod "604c61e1-dd09-42b7-b1c0-498876232002" (UID: "604c61e1-dd09-42b7-b1c0-498876232002"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.371213 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "604c61e1-dd09-42b7-b1c0-498876232002" (UID: "604c61e1-dd09-42b7-b1c0-498876232002"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.421023 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.421057 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/604c61e1-dd09-42b7-b1c0-498876232002-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.421070 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gsh2\" (UniqueName: \"kubernetes.io/projected/604c61e1-dd09-42b7-b1c0-498876232002-kube-api-access-8gsh2\") on node \"crc\" DevicePath \"\"" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.711287 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" event={"ID":"604c61e1-dd09-42b7-b1c0-498876232002","Type":"ContainerDied","Data":"52dde519fbd4f0e72e90ebb90bb4f5d3794e0717c7183e54a46b74199ec03293"} Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.711347 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52dde519fbd4f0e72e90ebb90bb4f5d3794e0717c7183e54a46b74199ec03293" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.711373 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wx9p8" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.829602 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-btd2n"] Dec 01 03:27:50 crc kubenswrapper[4880]: E1201 03:27:50.833521 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604c61e1-dd09-42b7-b1c0-498876232002" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.833543 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="604c61e1-dd09-42b7-b1c0-498876232002" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.837921 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="604c61e1-dd09-42b7-b1c0-498876232002" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.839424 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.852787 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.857840 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.858674 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.858949 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.859314 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-btd2n"] Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.930757 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.930845 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftdf\" (UniqueName: \"kubernetes.io/projected/c897e92c-5687-42f0-9467-03843beca9d4-kube-api-access-fftdf\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:50 crc kubenswrapper[4880]: I1201 03:27:50.932345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.034605 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.035081 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.036172 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftdf\" (UniqueName: \"kubernetes.io/projected/c897e92c-5687-42f0-9467-03843beca9d4-kube-api-access-fftdf\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.041208 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.048027 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.066580 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftdf\" (UniqueName: \"kubernetes.io/projected/c897e92c-5687-42f0-9467-03843beca9d4-kube-api-access-fftdf\") pod \"ssh-known-hosts-edpm-deployment-btd2n\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.176260 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.762860 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-btd2n"] Dec 01 03:27:51 crc kubenswrapper[4880]: W1201 03:27:51.776507 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc897e92c_5687_42f0_9467_03843beca9d4.slice/crio-a1d3226f851bc76f3be3befa49f87a07a2eae33499ac44d695f031403363f7c3 WatchSource:0}: Error finding container a1d3226f851bc76f3be3befa49f87a07a2eae33499ac44d695f031403363f7c3: Status 404 returned error can't find the container with id a1d3226f851bc76f3be3befa49f87a07a2eae33499ac44d695f031403363f7c3 Dec 01 03:27:51 crc kubenswrapper[4880]: I1201 03:27:51.780091 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:27:52 crc kubenswrapper[4880]: I1201 03:27:52.730215 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" event={"ID":"c897e92c-5687-42f0-9467-03843beca9d4","Type":"ContainerStarted","Data":"63d451cb2eb1c0e5043e6a3bb60190076e444143db54d25511d22971901408a1"} Dec 01 03:27:52 crc kubenswrapper[4880]: I1201 03:27:52.730524 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" event={"ID":"c897e92c-5687-42f0-9467-03843beca9d4","Type":"ContainerStarted","Data":"a1d3226f851bc76f3be3befa49f87a07a2eae33499ac44d695f031403363f7c3"} Dec 01 03:27:52 crc kubenswrapper[4880]: I1201 03:27:52.746101 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" podStartSLOduration=2.252580059 podStartE2EDuration="2.746077738s" podCreationTimestamp="2025-12-01 03:27:50 +0000 UTC" firstStartedPulling="2025-12-01 03:27:51.779555412 +0000 UTC m=+1901.290809814" lastFinishedPulling="2025-12-01 03:27:52.273053111 +0000 UTC m=+1901.784307493" observedRunningTime="2025-12-01 03:27:52.744812698 +0000 UTC m=+1902.256067080" watchObservedRunningTime="2025-12-01 03:27:52.746077738 +0000 UTC m=+1902.257332120" Dec 01 03:28:00 crc kubenswrapper[4880]: I1201 03:28:00.815424 4880 generic.go:334] "Generic (PLEG): container finished" podID="c897e92c-5687-42f0-9467-03843beca9d4" containerID="63d451cb2eb1c0e5043e6a3bb60190076e444143db54d25511d22971901408a1" exitCode=0 Dec 01 03:28:00 crc kubenswrapper[4880]: I1201 03:28:00.815528 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" event={"ID":"c897e92c-5687-42f0-9467-03843beca9d4","Type":"ContainerDied","Data":"63d451cb2eb1c0e5043e6a3bb60190076e444143db54d25511d22971901408a1"} Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.328731 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.506119 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-inventory-0\") pod \"c897e92c-5687-42f0-9467-03843beca9d4\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.507079 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fftdf\" (UniqueName: \"kubernetes.io/projected/c897e92c-5687-42f0-9467-03843beca9d4-kube-api-access-fftdf\") pod \"c897e92c-5687-42f0-9467-03843beca9d4\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.508085 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-ssh-key-openstack-edpm-ipam\") pod \"c897e92c-5687-42f0-9467-03843beca9d4\" (UID: \"c897e92c-5687-42f0-9467-03843beca9d4\") " Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.517100 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c897e92c-5687-42f0-9467-03843beca9d4-kube-api-access-fftdf" (OuterVolumeSpecName: "kube-api-access-fftdf") pod "c897e92c-5687-42f0-9467-03843beca9d4" (UID: "c897e92c-5687-42f0-9467-03843beca9d4"). InnerVolumeSpecName "kube-api-access-fftdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.553773 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c897e92c-5687-42f0-9467-03843beca9d4" (UID: "c897e92c-5687-42f0-9467-03843beca9d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.558249 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c897e92c-5687-42f0-9467-03843beca9d4" (UID: "c897e92c-5687-42f0-9467-03843beca9d4"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.611970 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.612030 4880 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c897e92c-5687-42f0-9467-03843beca9d4-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.612050 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftdf\" (UniqueName: \"kubernetes.io/projected/c897e92c-5687-42f0-9467-03843beca9d4-kube-api-access-fftdf\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.840458 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" 
event={"ID":"c897e92c-5687-42f0-9467-03843beca9d4","Type":"ContainerDied","Data":"a1d3226f851bc76f3be3befa49f87a07a2eae33499ac44d695f031403363f7c3"} Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.840503 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d3226f851bc76f3be3befa49f87a07a2eae33499ac44d695f031403363f7c3" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.840514 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-btd2n" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.939303 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh"] Dec 01 03:28:02 crc kubenswrapper[4880]: E1201 03:28:02.939775 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c897e92c-5687-42f0-9467-03843beca9d4" containerName="ssh-known-hosts-edpm-deployment" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.939797 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c897e92c-5687-42f0-9467-03843beca9d4" containerName="ssh-known-hosts-edpm-deployment" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.940046 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="c897e92c-5687-42f0-9467-03843beca9d4" containerName="ssh-known-hosts-edpm-deployment" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.940825 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.942853 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.943025 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.944236 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.951061 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:28:02 crc kubenswrapper[4880]: I1201 03:28:02.962682 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh"] Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.020800 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zl6\" (UniqueName: \"kubernetes.io/projected/0681e7b0-6706-483e-9060-d5c727573e6f-kube-api-access-x8zl6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.020948 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.021014 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.123712 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8zl6\" (UniqueName: \"kubernetes.io/projected/0681e7b0-6706-483e-9060-d5c727573e6f-kube-api-access-x8zl6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.123808 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.123885 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.130004 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.143250 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8zl6\" (UniqueName: \"kubernetes.io/projected/0681e7b0-6706-483e-9060-d5c727573e6f-kube-api-access-x8zl6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.145374 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-knxjh\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.259690 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:03 crc kubenswrapper[4880]: I1201 03:28:03.872724 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh"] Dec 01 03:28:04 crc kubenswrapper[4880]: I1201 03:28:04.856547 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" event={"ID":"0681e7b0-6706-483e-9060-d5c727573e6f","Type":"ContainerStarted","Data":"655120bbecca93e296bdec8a4ba0fc7fc7771669444835bf817902b91be47cb5"} Dec 01 03:28:05 crc kubenswrapper[4880]: I1201 03:28:05.877634 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" event={"ID":"0681e7b0-6706-483e-9060-d5c727573e6f","Type":"ContainerStarted","Data":"d9d3c92f8104f689fbbc57bbfb7f8935d21a40b2b6accda6b4cb9ffe2378cb6a"} Dec 01 03:28:05 crc kubenswrapper[4880]: I1201 03:28:05.896233 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" podStartSLOduration=3.135117542 podStartE2EDuration="3.896214746s" podCreationTimestamp="2025-12-01 03:28:02 +0000 UTC" firstStartedPulling="2025-12-01 03:28:03.884138844 +0000 UTC m=+1913.395393226" lastFinishedPulling="2025-12-01 03:28:04.645236048 +0000 UTC m=+1914.156490430" observedRunningTime="2025-12-01 03:28:05.893196154 +0000 UTC m=+1915.404450576" watchObservedRunningTime="2025-12-01 03:28:05.896214746 +0000 UTC m=+1915.407469118" Dec 01 03:28:14 crc kubenswrapper[4880]: I1201 03:28:14.983065 4880 generic.go:334] "Generic (PLEG): container finished" podID="0681e7b0-6706-483e-9060-d5c727573e6f" containerID="d9d3c92f8104f689fbbc57bbfb7f8935d21a40b2b6accda6b4cb9ffe2378cb6a" exitCode=0 Dec 01 03:28:14 crc kubenswrapper[4880]: I1201 03:28:14.983156 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" event={"ID":"0681e7b0-6706-483e-9060-d5c727573e6f","Type":"ContainerDied","Data":"d9d3c92f8104f689fbbc57bbfb7f8935d21a40b2b6accda6b4cb9ffe2378cb6a"} Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.466808 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.537355 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-inventory\") pod \"0681e7b0-6706-483e-9060-d5c727573e6f\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.537719 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8zl6\" (UniqueName: \"kubernetes.io/projected/0681e7b0-6706-483e-9060-d5c727573e6f-kube-api-access-x8zl6\") pod \"0681e7b0-6706-483e-9060-d5c727573e6f\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.538230 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-ssh-key\") pod \"0681e7b0-6706-483e-9060-d5c727573e6f\" (UID: \"0681e7b0-6706-483e-9060-d5c727573e6f\") " Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.555186 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0681e7b0-6706-483e-9060-d5c727573e6f-kube-api-access-x8zl6" (OuterVolumeSpecName: "kube-api-access-x8zl6") pod "0681e7b0-6706-483e-9060-d5c727573e6f" (UID: "0681e7b0-6706-483e-9060-d5c727573e6f"). InnerVolumeSpecName "kube-api-access-x8zl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.574262 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0681e7b0-6706-483e-9060-d5c727573e6f" (UID: "0681e7b0-6706-483e-9060-d5c727573e6f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.578130 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-inventory" (OuterVolumeSpecName: "inventory") pod "0681e7b0-6706-483e-9060-d5c727573e6f" (UID: "0681e7b0-6706-483e-9060-d5c727573e6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.640526 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.640569 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0681e7b0-6706-483e-9060-d5c727573e6f-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:16 crc kubenswrapper[4880]: I1201 03:28:16.640581 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8zl6\" (UniqueName: \"kubernetes.io/projected/0681e7b0-6706-483e-9060-d5c727573e6f-kube-api-access-x8zl6\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.002724 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" 
event={"ID":"0681e7b0-6706-483e-9060-d5c727573e6f","Type":"ContainerDied","Data":"655120bbecca93e296bdec8a4ba0fc7fc7771669444835bf817902b91be47cb5"} Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.002767 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-knxjh" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.002768 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="655120bbecca93e296bdec8a4ba0fc7fc7771669444835bf817902b91be47cb5" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.102452 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6"] Dec 01 03:28:17 crc kubenswrapper[4880]: E1201 03:28:17.103213 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0681e7b0-6706-483e-9060-d5c727573e6f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.103233 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0681e7b0-6706-483e-9060-d5c727573e6f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.103483 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="0681e7b0-6706-483e-9060-d5c727573e6f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.104233 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.106292 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.106343 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.106389 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.111070 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.117454 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6"] Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.250511 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.250565 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzqq\" (UniqueName: \"kubernetes.io/projected/99e179c6-6282-4211-b32a-fbb090ad599d-kube-api-access-gkzqq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.251364 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.353179 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.353230 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzqq\" (UniqueName: \"kubernetes.io/projected/99e179c6-6282-4211-b32a-fbb090ad599d-kube-api-access-gkzqq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.353298 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.359853 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.362393 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.370634 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkzqq\" (UniqueName: \"kubernetes.io/projected/99e179c6-6282-4211-b32a-fbb090ad599d-kube-api-access-gkzqq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:17 crc kubenswrapper[4880]: I1201 03:28:17.425638 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:18 crc kubenswrapper[4880]: I1201 03:28:18.044856 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6"] Dec 01 03:28:19 crc kubenswrapper[4880]: I1201 03:28:19.027118 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" event={"ID":"99e179c6-6282-4211-b32a-fbb090ad599d","Type":"ContainerStarted","Data":"0d4d2a67ea58934379c3494ed7321127c0febf617be152d7efa1da873d58f9fb"} Dec 01 03:28:20 crc kubenswrapper[4880]: I1201 03:28:20.037536 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" event={"ID":"99e179c6-6282-4211-b32a-fbb090ad599d","Type":"ContainerStarted","Data":"71d8ecf859d272519ecfb76d305aabfb08638a1d8a2cc544ebf485ee8e3a8a6b"} Dec 01 03:28:20 crc kubenswrapper[4880]: I1201 03:28:20.062022 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" podStartSLOduration=2.276669824 podStartE2EDuration="3.062003323s" podCreationTimestamp="2025-12-01 03:28:17 +0000 UTC" firstStartedPulling="2025-12-01 03:28:18.061398466 +0000 UTC m=+1927.572652848" lastFinishedPulling="2025-12-01 03:28:18.846731965 +0000 UTC m=+1928.357986347" observedRunningTime="2025-12-01 03:28:20.057222508 +0000 UTC m=+1929.568476880" watchObservedRunningTime="2025-12-01 03:28:20.062003323 +0000 UTC m=+1929.573257695" Dec 01 03:28:30 crc kubenswrapper[4880]: I1201 03:28:30.144478 4880 generic.go:334] "Generic (PLEG): container finished" podID="99e179c6-6282-4211-b32a-fbb090ad599d" containerID="71d8ecf859d272519ecfb76d305aabfb08638a1d8a2cc544ebf485ee8e3a8a6b" exitCode=0 Dec 01 03:28:30 crc kubenswrapper[4880]: I1201 03:28:30.144586 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" event={"ID":"99e179c6-6282-4211-b32a-fbb090ad599d","Type":"ContainerDied","Data":"71d8ecf859d272519ecfb76d305aabfb08638a1d8a2cc544ebf485ee8e3a8a6b"} Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.627004 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.774727 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-inventory\") pod \"99e179c6-6282-4211-b32a-fbb090ad599d\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.776741 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-ssh-key\") pod \"99e179c6-6282-4211-b32a-fbb090ad599d\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.776826 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkzqq\" (UniqueName: \"kubernetes.io/projected/99e179c6-6282-4211-b32a-fbb090ad599d-kube-api-access-gkzqq\") pod \"99e179c6-6282-4211-b32a-fbb090ad599d\" (UID: \"99e179c6-6282-4211-b32a-fbb090ad599d\") " Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.781900 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e179c6-6282-4211-b32a-fbb090ad599d-kube-api-access-gkzqq" (OuterVolumeSpecName: "kube-api-access-gkzqq") pod "99e179c6-6282-4211-b32a-fbb090ad599d" (UID: "99e179c6-6282-4211-b32a-fbb090ad599d"). InnerVolumeSpecName "kube-api-access-gkzqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.813716 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99e179c6-6282-4211-b32a-fbb090ad599d" (UID: "99e179c6-6282-4211-b32a-fbb090ad599d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.840513 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-inventory" (OuterVolumeSpecName: "inventory") pod "99e179c6-6282-4211-b32a-fbb090ad599d" (UID: "99e179c6-6282-4211-b32a-fbb090ad599d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.878839 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.879155 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99e179c6-6282-4211-b32a-fbb090ad599d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:31 crc kubenswrapper[4880]: I1201 03:28:31.879169 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkzqq\" (UniqueName: \"kubernetes.io/projected/99e179c6-6282-4211-b32a-fbb090ad599d-kube-api-access-gkzqq\") on node \"crc\" DevicePath \"\"" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.171743 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" 
event={"ID":"99e179c6-6282-4211-b32a-fbb090ad599d","Type":"ContainerDied","Data":"0d4d2a67ea58934379c3494ed7321127c0febf617be152d7efa1da873d58f9fb"} Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.171798 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d4d2a67ea58934379c3494ed7321127c0febf617be152d7efa1da873d58f9fb" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.172006 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpnb6" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.275198 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg"] Dec 01 03:28:32 crc kubenswrapper[4880]: E1201 03:28:32.275633 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e179c6-6282-4211-b32a-fbb090ad599d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.275654 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e179c6-6282-4211-b32a-fbb090ad599d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.275862 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="99e179c6-6282-4211-b32a-fbb090ad599d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.276827 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.283989 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.284174 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.284299 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.284424 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.284554 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.284660 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.284767 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.285804 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.295767 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg"] Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.297952 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298034 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298079 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbl2\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-kube-api-access-wrbl2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298107 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298127 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298158 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298183 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298207 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298234 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298264 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298282 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298308 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298329 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.298345 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399602 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399662 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399706 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399739 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399762 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399795 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399915 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399964 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrbl2\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-kube-api-access-wrbl2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.399998 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.400024 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.400064 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 
03:28:32.400097 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.400124 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.400165 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.404522 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.405637 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.407939 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.408201 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.410495 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.410946 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.411521 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.411760 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.412126 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.412793 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: 
\"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.421481 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.421588 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.422135 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.424229 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrbl2\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-kube-api-access-wrbl2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:32 crc kubenswrapper[4880]: I1201 03:28:32.622911 
4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:28:33 crc kubenswrapper[4880]: I1201 03:28:33.254796 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg"] Dec 01 03:28:33 crc kubenswrapper[4880]: W1201 03:28:33.265594 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d6eba1_5700_4ee7_8db8_7d1e07b068fd.slice/crio-412f9b750757366000940dc5b102148054877c86b081cc025973e9c1005d6dc1 WatchSource:0}: Error finding container 412f9b750757366000940dc5b102148054877c86b081cc025973e9c1005d6dc1: Status 404 returned error can't find the container with id 412f9b750757366000940dc5b102148054877c86b081cc025973e9c1005d6dc1 Dec 01 03:28:34 crc kubenswrapper[4880]: I1201 03:28:34.202096 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" event={"ID":"28d6eba1-5700-4ee7-8db8-7d1e07b068fd","Type":"ContainerStarted","Data":"19fc88d5b14c912a952cf2e8477cf79099261d4741cd7f7d070ccf184014ee78"} Dec 01 03:28:34 crc kubenswrapper[4880]: I1201 03:28:34.202588 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" event={"ID":"28d6eba1-5700-4ee7-8db8-7d1e07b068fd","Type":"ContainerStarted","Data":"412f9b750757366000940dc5b102148054877c86b081cc025973e9c1005d6dc1"} Dec 01 03:28:34 crc kubenswrapper[4880]: I1201 03:28:34.231897 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" podStartSLOduration=1.815938646 podStartE2EDuration="2.231866318s" podCreationTimestamp="2025-12-01 03:28:32 +0000 UTC" firstStartedPulling="2025-12-01 03:28:33.27020128 +0000 UTC m=+1942.781455652" 
lastFinishedPulling="2025-12-01 03:28:33.686128942 +0000 UTC m=+1943.197383324" observedRunningTime="2025-12-01 03:28:34.224569993 +0000 UTC m=+1943.735824375" watchObservedRunningTime="2025-12-01 03:28:34.231866318 +0000 UTC m=+1943.743120690" Dec 01 03:29:17 crc kubenswrapper[4880]: I1201 03:29:17.369216 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:29:17 crc kubenswrapper[4880]: I1201 03:29:17.370021 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:29:18 crc kubenswrapper[4880]: I1201 03:29:18.641646 4880 generic.go:334] "Generic (PLEG): container finished" podID="28d6eba1-5700-4ee7-8db8-7d1e07b068fd" containerID="19fc88d5b14c912a952cf2e8477cf79099261d4741cd7f7d070ccf184014ee78" exitCode=0 Dec 01 03:29:18 crc kubenswrapper[4880]: I1201 03:29:18.641760 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" event={"ID":"28d6eba1-5700-4ee7-8db8-7d1e07b068fd","Type":"ContainerDied","Data":"19fc88d5b14c912a952cf2e8477cf79099261d4741cd7f7d070ccf184014ee78"} Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.108353 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.170915 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-telemetry-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.171011 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ssh-key\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.171728 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.171783 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ovn-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.171812 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-libvirt-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: 
I1201 03:29:20.171935 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrbl2\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-kube-api-access-wrbl2\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.171970 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-repo-setup-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172061 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-inventory\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172128 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172194 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172230 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172264 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-nova-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172301 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-bootstrap-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.172350 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-neutron-metadata-combined-ca-bundle\") pod \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\" (UID: \"28d6eba1-5700-4ee7-8db8-7d1e07b068fd\") " Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.176957 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.178977 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.179085 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.183459 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-kube-api-access-wrbl2" (OuterVolumeSpecName: "kube-api-access-wrbl2") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "kube-api-access-wrbl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.183539 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.183794 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.183822 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.183983 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.184180 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.185191 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.189476 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.194380 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.212650 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-inventory" (OuterVolumeSpecName: "inventory") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.218110 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "28d6eba1-5700-4ee7-8db8-7d1e07b068fd" (UID: "28d6eba1-5700-4ee7-8db8-7d1e07b068fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.274892 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.274929 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.274945 4880 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.274959 4880 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.274973 4880 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-neutron-metadata-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.274987 4880 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275000 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275013 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275026 4880 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275039 4880 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275052 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrbl2\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-kube-api-access-wrbl2\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275065 4880 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-repo-setup-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275076 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.275088 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/28d6eba1-5700-4ee7-8db8-7d1e07b068fd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.670486 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" event={"ID":"28d6eba1-5700-4ee7-8db8-7d1e07b068fd","Type":"ContainerDied","Data":"412f9b750757366000940dc5b102148054877c86b081cc025973e9c1005d6dc1"} Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.670528 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412f9b750757366000940dc5b102148054877c86b081cc025973e9c1005d6dc1" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.670579 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-q9sfg" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.775649 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s"] Dec 01 03:29:20 crc kubenswrapper[4880]: E1201 03:29:20.776630 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d6eba1-5700-4ee7-8db8-7d1e07b068fd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.776674 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d6eba1-5700-4ee7-8db8-7d1e07b068fd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.778492 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d6eba1-5700-4ee7-8db8-7d1e07b068fd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.779488 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.784155 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.784456 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.784603 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.786331 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.786471 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.825495 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s"] Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.885440 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.885572 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fbv\" (UniqueName: \"kubernetes.io/projected/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-kube-api-access-r5fbv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.885962 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.886142 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.886324 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.988644 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.988737 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.988819 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.989011 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.989102 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fbv\" (UniqueName: \"kubernetes.io/projected/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-kube-api-access-r5fbv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.990051 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc 
kubenswrapper[4880]: I1201 03:29:20.993613 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.995182 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:20 crc kubenswrapper[4880]: I1201 03:29:20.995603 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:21 crc kubenswrapper[4880]: I1201 03:29:21.011269 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fbv\" (UniqueName: \"kubernetes.io/projected/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-kube-api-access-r5fbv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-4ls7s\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:21 crc kubenswrapper[4880]: I1201 03:29:21.137025 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:29:21 crc kubenswrapper[4880]: I1201 03:29:21.496644 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s"] Dec 01 03:29:21 crc kubenswrapper[4880]: I1201 03:29:21.689482 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" event={"ID":"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6","Type":"ContainerStarted","Data":"6d701ec69ad1d28087da559b533dad95282d4c2776b53980a921877132666702"} Dec 01 03:29:22 crc kubenswrapper[4880]: I1201 03:29:22.698383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" event={"ID":"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6","Type":"ContainerStarted","Data":"6350d67c0e6a9bc232b1809dbd0a40f9a6ac4086999f273090f139609dd129d4"} Dec 01 03:29:22 crc kubenswrapper[4880]: I1201 03:29:22.721795 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" podStartSLOduration=2.304183856 podStartE2EDuration="2.72177655s" podCreationTimestamp="2025-12-01 03:29:20 +0000 UTC" firstStartedPulling="2025-12-01 03:29:21.509294278 +0000 UTC m=+1991.020548660" lastFinishedPulling="2025-12-01 03:29:21.926886962 +0000 UTC m=+1991.438141354" observedRunningTime="2025-12-01 03:29:22.711983044 +0000 UTC m=+1992.223237416" watchObservedRunningTime="2025-12-01 03:29:22.72177655 +0000 UTC m=+1992.233030922" Dec 01 03:29:47 crc kubenswrapper[4880]: I1201 03:29:47.368867 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:29:47 crc kubenswrapper[4880]: I1201 03:29:47.369438 4880 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.176744 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9"] Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.179370 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.182097 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.182639 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.187662 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9"] Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.239896 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a53464-da08-48b9-83fd-5f68c6dfe562-config-volume\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.239954 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/44a53464-da08-48b9-83fd-5f68c6dfe562-secret-volume\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.240003 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnpk8\" (UniqueName: \"kubernetes.io/projected/44a53464-da08-48b9-83fd-5f68c6dfe562-kube-api-access-fnpk8\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.341830 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a53464-da08-48b9-83fd-5f68c6dfe562-config-volume\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.341928 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a53464-da08-48b9-83fd-5f68c6dfe562-secret-volume\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.341961 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnpk8\" (UniqueName: \"kubernetes.io/projected/44a53464-da08-48b9-83fd-5f68c6dfe562-kube-api-access-fnpk8\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc 
kubenswrapper[4880]: I1201 03:30:00.342863 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a53464-da08-48b9-83fd-5f68c6dfe562-config-volume\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.350160 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a53464-da08-48b9-83fd-5f68c6dfe562-secret-volume\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.357200 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnpk8\" (UniqueName: \"kubernetes.io/projected/44a53464-da08-48b9-83fd-5f68c6dfe562-kube-api-access-fnpk8\") pod \"collect-profiles-29409330-n9cw9\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.504408 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:00 crc kubenswrapper[4880]: I1201 03:30:00.806288 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9"] Dec 01 03:30:00 crc kubenswrapper[4880]: W1201 03:30:00.815165 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44a53464_da08_48b9_83fd_5f68c6dfe562.slice/crio-30ba7534fdf9c0d10d30601f897568c1180cb24d920da0279f43c45fae67bbc2 WatchSource:0}: Error finding container 30ba7534fdf9c0d10d30601f897568c1180cb24d920da0279f43c45fae67bbc2: Status 404 returned error can't find the container with id 30ba7534fdf9c0d10d30601f897568c1180cb24d920da0279f43c45fae67bbc2 Dec 01 03:30:01 crc kubenswrapper[4880]: I1201 03:30:01.105152 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" event={"ID":"44a53464-da08-48b9-83fd-5f68c6dfe562","Type":"ContainerStarted","Data":"d8a2eb40a2ccf6ca09b9c5039d5d3e1582f1ee31b28113cdd56b55250e7a0e8e"} Dec 01 03:30:01 crc kubenswrapper[4880]: I1201 03:30:01.105555 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" event={"ID":"44a53464-da08-48b9-83fd-5f68c6dfe562","Type":"ContainerStarted","Data":"30ba7534fdf9c0d10d30601f897568c1180cb24d920da0279f43c45fae67bbc2"} Dec 01 03:30:01 crc kubenswrapper[4880]: I1201 03:30:01.164325 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" podStartSLOduration=1.164298946 podStartE2EDuration="1.164298946s" podCreationTimestamp="2025-12-01 03:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 
03:30:01.122246835 +0000 UTC m=+2030.633501217" watchObservedRunningTime="2025-12-01 03:30:01.164298946 +0000 UTC m=+2030.675553328" Dec 01 03:30:02 crc kubenswrapper[4880]: I1201 03:30:02.117393 4880 generic.go:334] "Generic (PLEG): container finished" podID="44a53464-da08-48b9-83fd-5f68c6dfe562" containerID="d8a2eb40a2ccf6ca09b9c5039d5d3e1582f1ee31b28113cdd56b55250e7a0e8e" exitCode=0 Dec 01 03:30:02 crc kubenswrapper[4880]: I1201 03:30:02.117509 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" event={"ID":"44a53464-da08-48b9-83fd-5f68c6dfe562","Type":"ContainerDied","Data":"d8a2eb40a2ccf6ca09b9c5039d5d3e1582f1ee31b28113cdd56b55250e7a0e8e"} Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.516664 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.614958 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a53464-da08-48b9-83fd-5f68c6dfe562-secret-volume\") pod \"44a53464-da08-48b9-83fd-5f68c6dfe562\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.615207 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a53464-da08-48b9-83fd-5f68c6dfe562-config-volume\") pod \"44a53464-da08-48b9-83fd-5f68c6dfe562\" (UID: \"44a53464-da08-48b9-83fd-5f68c6dfe562\") " Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.615407 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnpk8\" (UniqueName: \"kubernetes.io/projected/44a53464-da08-48b9-83fd-5f68c6dfe562-kube-api-access-fnpk8\") pod \"44a53464-da08-48b9-83fd-5f68c6dfe562\" (UID: 
\"44a53464-da08-48b9-83fd-5f68c6dfe562\") " Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.616018 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a53464-da08-48b9-83fd-5f68c6dfe562-config-volume" (OuterVolumeSpecName: "config-volume") pod "44a53464-da08-48b9-83fd-5f68c6dfe562" (UID: "44a53464-da08-48b9-83fd-5f68c6dfe562"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.616253 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44a53464-da08-48b9-83fd-5f68c6dfe562-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.620637 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a53464-da08-48b9-83fd-5f68c6dfe562-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "44a53464-da08-48b9-83fd-5f68c6dfe562" (UID: "44a53464-da08-48b9-83fd-5f68c6dfe562"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.626153 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a53464-da08-48b9-83fd-5f68c6dfe562-kube-api-access-fnpk8" (OuterVolumeSpecName: "kube-api-access-fnpk8") pod "44a53464-da08-48b9-83fd-5f68c6dfe562" (UID: "44a53464-da08-48b9-83fd-5f68c6dfe562"). InnerVolumeSpecName "kube-api-access-fnpk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.719146 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnpk8\" (UniqueName: \"kubernetes.io/projected/44a53464-da08-48b9-83fd-5f68c6dfe562-kube-api-access-fnpk8\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:03 crc kubenswrapper[4880]: I1201 03:30:03.719197 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/44a53464-da08-48b9-83fd-5f68c6dfe562-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:04 crc kubenswrapper[4880]: I1201 03:30:04.143433 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" event={"ID":"44a53464-da08-48b9-83fd-5f68c6dfe562","Type":"ContainerDied","Data":"30ba7534fdf9c0d10d30601f897568c1180cb24d920da0279f43c45fae67bbc2"} Dec 01 03:30:04 crc kubenswrapper[4880]: I1201 03:30:04.143680 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30ba7534fdf9c0d10d30601f897568c1180cb24d920da0279f43c45fae67bbc2" Dec 01 03:30:04 crc kubenswrapper[4880]: I1201 03:30:04.143595 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9" Dec 01 03:30:04 crc kubenswrapper[4880]: I1201 03:30:04.595160 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb"] Dec 01 03:30:04 crc kubenswrapper[4880]: I1201 03:30:04.614371 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409285-9cplb"] Dec 01 03:30:04 crc kubenswrapper[4880]: I1201 03:30:04.797358 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9fd9260-cfde-4ec1-8b3c-c757712369d6" path="/var/lib/kubelet/pods/d9fd9260-cfde-4ec1-8b3c-c757712369d6/volumes" Dec 01 03:30:17 crc kubenswrapper[4880]: I1201 03:30:17.368553 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:30:17 crc kubenswrapper[4880]: I1201 03:30:17.369031 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:30:17 crc kubenswrapper[4880]: I1201 03:30:17.369079 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:30:17 crc kubenswrapper[4880]: I1201 03:30:17.369579 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de0a73b1010be3c3f56109b45fc94aace1b7ec1e62e0fed3d697920f77540b99"} 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:30:17 crc kubenswrapper[4880]: I1201 03:30:17.369642 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://de0a73b1010be3c3f56109b45fc94aace1b7ec1e62e0fed3d697920f77540b99" gracePeriod=600 Dec 01 03:30:18 crc kubenswrapper[4880]: I1201 03:30:18.300728 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="de0a73b1010be3c3f56109b45fc94aace1b7ec1e62e0fed3d697920f77540b99" exitCode=0 Dec 01 03:30:18 crc kubenswrapper[4880]: I1201 03:30:18.300771 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"de0a73b1010be3c3f56109b45fc94aace1b7ec1e62e0fed3d697920f77540b99"} Dec 01 03:30:18 crc kubenswrapper[4880]: I1201 03:30:18.301498 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b"} Dec 01 03:30:18 crc kubenswrapper[4880]: I1201 03:30:18.301535 4880 scope.go:117] "RemoveContainer" containerID="76dafd8fc0476a2a8fc25353cf3efe088ef34465a507bf79fade85d01bca31f4" Dec 01 03:30:31 crc kubenswrapper[4880]: I1201 03:30:31.234373 4880 scope.go:117] "RemoveContainer" containerID="986f68b703de6eaef39b42051948c64a8455f000e22ac1e864d6516dd6aa5f1e" Dec 01 03:30:37 crc kubenswrapper[4880]: I1201 03:30:37.489510 4880 generic.go:334] "Generic (PLEG): container finished" 
podID="1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" containerID="6350d67c0e6a9bc232b1809dbd0a40f9a6ac4086999f273090f139609dd129d4" exitCode=0 Dec 01 03:30:37 crc kubenswrapper[4880]: I1201 03:30:37.490060 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" event={"ID":"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6","Type":"ContainerDied","Data":"6350d67c0e6a9bc232b1809dbd0a40f9a6ac4086999f273090f139609dd129d4"} Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.090852 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.235513 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ssh-key\") pod \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.235563 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5fbv\" (UniqueName: \"kubernetes.io/projected/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-kube-api-access-r5fbv\") pod \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.235651 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovn-combined-ca-bundle\") pod \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.235703 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-inventory\") pod 
\"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.235769 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovncontroller-config-0\") pod \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\" (UID: \"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6\") " Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.241119 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-kube-api-access-r5fbv" (OuterVolumeSpecName: "kube-api-access-r5fbv") pod "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" (UID: "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6"). InnerVolumeSpecName "kube-api-access-r5fbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.252031 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" (UID: "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.261381 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" (UID: "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.262636 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-inventory" (OuterVolumeSpecName: "inventory") pod "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" (UID: "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.289359 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" (UID: "1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.338055 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.338382 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5fbv\" (UniqueName: \"kubernetes.io/projected/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-kube-api-access-r5fbv\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.338526 4880 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.340094 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-inventory\") on node \"crc\" 
DevicePath \"\"" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.340115 4880 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.513351 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" event={"ID":"1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6","Type":"ContainerDied","Data":"6d701ec69ad1d28087da559b533dad95282d4c2776b53980a921877132666702"} Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.513393 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d701ec69ad1d28087da559b533dad95282d4c2776b53980a921877132666702" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.513453 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-4ls7s" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.675109 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb"] Dec 01 03:30:39 crc kubenswrapper[4880]: E1201 03:30:39.676769 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a53464-da08-48b9-83fd-5f68c6dfe562" containerName="collect-profiles" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.676789 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a53464-da08-48b9-83fd-5f68c6dfe562" containerName="collect-profiles" Dec 01 03:30:39 crc kubenswrapper[4880]: E1201 03:30:39.676832 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.676839 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.678073 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf6a2bb-19ac-4d4e-a087-fdde55f7c3a6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.678104 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a53464-da08-48b9-83fd-5f68c6dfe562" containerName="collect-profiles" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.679257 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.692212 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.692232 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.692509 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.692654 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.692776 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.692925 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.694655 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb"] Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.849117 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.849213 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf6p\" (UniqueName: \"kubernetes.io/projected/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-kube-api-access-xgf6p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.849282 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.849312 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.849362 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.849393 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.950709 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgf6p\" (UniqueName: \"kubernetes.io/projected/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-kube-api-access-xgf6p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.951278 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 
03:30:39.951342 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.951431 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.951489 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.951576 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.955444 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.955817 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.956027 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.956566 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.958134 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:39 crc kubenswrapper[4880]: I1201 03:30:39.976630 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgf6p\" (UniqueName: \"kubernetes.io/projected/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-kube-api-access-xgf6p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:40 crc kubenswrapper[4880]: I1201 03:30:40.021453 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:30:40 crc kubenswrapper[4880]: I1201 03:30:40.388602 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb"] Dec 01 03:30:40 crc kubenswrapper[4880]: I1201 03:30:40.527525 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" event={"ID":"6c18f9fb-de57-425d-8c3d-5b15d8fb3637","Type":"ContainerStarted","Data":"2e5d0bf3447818d18946d3859aa3460aabcee1ed1ae5f36c3f5815a8ea7b73a3"} Dec 01 03:30:41 crc kubenswrapper[4880]: I1201 03:30:41.538226 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" event={"ID":"6c18f9fb-de57-425d-8c3d-5b15d8fb3637","Type":"ContainerStarted","Data":"4a0f8f2a1edab73da33cc6fb78a2d0cbf34ec7a3723f7a254f0d4a6543c3c86a"} Dec 01 03:30:41 crc kubenswrapper[4880]: I1201 03:30:41.562504 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" podStartSLOduration=1.963280847 
podStartE2EDuration="2.562481878s" podCreationTimestamp="2025-12-01 03:30:39 +0000 UTC" firstStartedPulling="2025-12-01 03:30:40.394282271 +0000 UTC m=+2069.905536653" lastFinishedPulling="2025-12-01 03:30:40.993483282 +0000 UTC m=+2070.504737684" observedRunningTime="2025-12-01 03:30:41.554213749 +0000 UTC m=+2071.065468121" watchObservedRunningTime="2025-12-01 03:30:41.562481878 +0000 UTC m=+2071.073736250" Dec 01 03:31:38 crc kubenswrapper[4880]: I1201 03:31:38.122091 4880 generic.go:334] "Generic (PLEG): container finished" podID="6c18f9fb-de57-425d-8c3d-5b15d8fb3637" containerID="4a0f8f2a1edab73da33cc6fb78a2d0cbf34ec7a3723f7a254f0d4a6543c3c86a" exitCode=0 Dec 01 03:31:38 crc kubenswrapper[4880]: I1201 03:31:38.122195 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" event={"ID":"6c18f9fb-de57-425d-8c3d-5b15d8fb3637","Type":"ContainerDied","Data":"4a0f8f2a1edab73da33cc6fb78a2d0cbf34ec7a3723f7a254f0d4a6543c3c86a"} Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.553417 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.660367 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-nova-metadata-neutron-config-0\") pod \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.660399 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-metadata-combined-ca-bundle\") pod \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.660456 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgf6p\" (UniqueName: \"kubernetes.io/projected/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-kube-api-access-xgf6p\") pod \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.660538 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.660614 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-ssh-key\") pod \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " Dec 01 
03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.660652 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-inventory\") pod \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\" (UID: \"6c18f9fb-de57-425d-8c3d-5b15d8fb3637\") " Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.666885 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6c18f9fb-de57-425d-8c3d-5b15d8fb3637" (UID: "6c18f9fb-de57-425d-8c3d-5b15d8fb3637"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.676821 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-kube-api-access-xgf6p" (OuterVolumeSpecName: "kube-api-access-xgf6p") pod "6c18f9fb-de57-425d-8c3d-5b15d8fb3637" (UID: "6c18f9fb-de57-425d-8c3d-5b15d8fb3637"). InnerVolumeSpecName "kube-api-access-xgf6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.700474 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-inventory" (OuterVolumeSpecName: "inventory") pod "6c18f9fb-de57-425d-8c3d-5b15d8fb3637" (UID: "6c18f9fb-de57-425d-8c3d-5b15d8fb3637"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.705299 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6c18f9fb-de57-425d-8c3d-5b15d8fb3637" (UID: "6c18f9fb-de57-425d-8c3d-5b15d8fb3637"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.710330 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c18f9fb-de57-425d-8c3d-5b15d8fb3637" (UID: "6c18f9fb-de57-425d-8c3d-5b15d8fb3637"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.717254 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6c18f9fb-de57-425d-8c3d-5b15d8fb3637" (UID: "6c18f9fb-de57-425d-8c3d-5b15d8fb3637"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.762593 4880 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.762617 4880 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.762627 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgf6p\" (UniqueName: \"kubernetes.io/projected/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-kube-api-access-xgf6p\") on node \"crc\" DevicePath \"\"" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.762638 4880 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.762649 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:31:39 crc kubenswrapper[4880]: I1201 03:31:39.762658 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c18f9fb-de57-425d-8c3d-5b15d8fb3637-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.842370 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.894055 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86"] Dec 01 03:31:40 crc kubenswrapper[4880]: E1201 03:31:40.894854 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c18f9fb-de57-425d-8c3d-5b15d8fb3637" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.894887 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c18f9fb-de57-425d-8c3d-5b15d8fb3637" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.895259 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c18f9fb-de57-425d-8c3d-5b15d8fb3637" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.896080 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6whlb" event={"ID":"6c18f9fb-de57-425d-8c3d-5b15d8fb3637","Type":"ContainerDied","Data":"2e5d0bf3447818d18946d3859aa3460aabcee1ed1ae5f36c3f5815a8ea7b73a3"} Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.896112 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5d0bf3447818d18946d3859aa3460aabcee1ed1ae5f36c3f5815a8ea7b73a3" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.896126 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86"] Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.896209 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.899212 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.899991 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.900284 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.900862 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:31:40 crc kubenswrapper[4880]: I1201 03:31:40.900916 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.004011 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.004061 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk852\" (UniqueName: \"kubernetes.io/projected/8c13471c-49c7-422e-9062-a738c339c136-kube-api-access-fk852\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.004095 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.004230 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.004296 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.105726 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.105804 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.105843 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk852\" (UniqueName: \"kubernetes.io/projected/8c13471c-49c7-422e-9062-a738c339c136-kube-api-access-fk852\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.105886 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.105977 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.110521 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.110776 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.111334 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.111847 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.127721 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk852\" (UniqueName: \"kubernetes.io/projected/8c13471c-49c7-422e-9062-a738c339c136-kube-api-access-fk852\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xcs86\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.236054 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:31:41 crc kubenswrapper[4880]: I1201 03:31:41.854802 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86"] Dec 01 03:31:42 crc kubenswrapper[4880]: I1201 03:31:42.865723 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" event={"ID":"8c13471c-49c7-422e-9062-a738c339c136","Type":"ContainerStarted","Data":"8c4a4a72fd660fdda4d3ffa855cf07b25da4e9963094cc7aea07ec55d6443cf8"} Dec 01 03:31:42 crc kubenswrapper[4880]: I1201 03:31:42.866073 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" event={"ID":"8c13471c-49c7-422e-9062-a738c339c136","Type":"ContainerStarted","Data":"c0154f5be64f92176fb8b9b7e217a93589a18b4d9963b91a1fc3feb71c906266"} Dec 01 03:31:42 crc kubenswrapper[4880]: I1201 03:31:42.902794 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" podStartSLOduration=2.455803264 podStartE2EDuration="2.902763265s" podCreationTimestamp="2025-12-01 03:31:40 +0000 UTC" firstStartedPulling="2025-12-01 03:31:41.860934689 +0000 UTC m=+2131.372189061" lastFinishedPulling="2025-12-01 03:31:42.30789468 +0000 UTC m=+2131.819149062" observedRunningTime="2025-12-01 03:31:42.89439085 +0000 UTC m=+2132.405645262" watchObservedRunningTime="2025-12-01 03:31:42.902763265 +0000 UTC m=+2132.414017667" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.275808 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2rzww"] Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.278146 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.310381 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rzww"] Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.362980 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-catalog-content\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.363056 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngg7p\" (UniqueName: \"kubernetes.io/projected/812e3245-0438-439f-ad41-44bbf31fa7d5-kube-api-access-ngg7p\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.363094 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-utilities\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.464523 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-catalog-content\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.464584 4880 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ngg7p\" (UniqueName: \"kubernetes.io/projected/812e3245-0438-439f-ad41-44bbf31fa7d5-kube-api-access-ngg7p\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.464621 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-utilities\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.465048 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-catalog-content\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.465167 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-utilities\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.500175 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngg7p\" (UniqueName: \"kubernetes.io/projected/812e3245-0438-439f-ad41-44bbf31fa7d5-kube-api-access-ngg7p\") pod \"redhat-operators-2rzww\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:02 crc kubenswrapper[4880]: I1201 03:32:02.602344 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:03 crc kubenswrapper[4880]: I1201 03:32:03.057894 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rzww"] Dec 01 03:32:03 crc kubenswrapper[4880]: I1201 03:32:03.102155 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerStarted","Data":"8850332358ff2a127563d226c323cde063e9d3dfc969e7a7371da930be8c3dc7"} Dec 01 03:32:04 crc kubenswrapper[4880]: I1201 03:32:04.115170 4880 generic.go:334] "Generic (PLEG): container finished" podID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerID="69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf" exitCode=0 Dec 01 03:32:04 crc kubenswrapper[4880]: I1201 03:32:04.115276 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerDied","Data":"69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf"} Dec 01 03:32:06 crc kubenswrapper[4880]: I1201 03:32:06.142916 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerStarted","Data":"a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd"} Dec 01 03:32:09 crc kubenswrapper[4880]: I1201 03:32:09.168427 4880 generic.go:334] "Generic (PLEG): container finished" podID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerID="a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd" exitCode=0 Dec 01 03:32:09 crc kubenswrapper[4880]: I1201 03:32:09.168470 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" 
event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerDied","Data":"a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd"} Dec 01 03:32:10 crc kubenswrapper[4880]: I1201 03:32:10.186176 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerStarted","Data":"09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741"} Dec 01 03:32:10 crc kubenswrapper[4880]: I1201 03:32:10.209030 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2rzww" podStartSLOduration=2.667853376 podStartE2EDuration="8.209013611s" podCreationTimestamp="2025-12-01 03:32:02 +0000 UTC" firstStartedPulling="2025-12-01 03:32:04.117774418 +0000 UTC m=+2153.629028830" lastFinishedPulling="2025-12-01 03:32:09.658934683 +0000 UTC m=+2159.170189065" observedRunningTime="2025-12-01 03:32:10.204720456 +0000 UTC m=+2159.715974888" watchObservedRunningTime="2025-12-01 03:32:10.209013611 +0000 UTC m=+2159.720267983" Dec 01 03:32:12 crc kubenswrapper[4880]: I1201 03:32:12.603857 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:12 crc kubenswrapper[4880]: I1201 03:32:12.604100 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:13 crc kubenswrapper[4880]: I1201 03:32:13.660961 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2rzww" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="registry-server" probeResult="failure" output=< Dec 01 03:32:13 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 03:32:13 crc kubenswrapper[4880]: > Dec 01 03:32:17 crc kubenswrapper[4880]: I1201 03:32:17.369337 4880 patch_prober.go:28] interesting 
pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:32:17 crc kubenswrapper[4880]: I1201 03:32:17.369960 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:32:22 crc kubenswrapper[4880]: I1201 03:32:22.660047 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:22 crc kubenswrapper[4880]: I1201 03:32:22.707887 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:22 crc kubenswrapper[4880]: I1201 03:32:22.905790 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rzww"] Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.322309 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2rzww" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="registry-server" containerID="cri-o://09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741" gracePeriod=2 Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.795627 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.948659 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-utilities\") pod \"812e3245-0438-439f-ad41-44bbf31fa7d5\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.948956 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngg7p\" (UniqueName: \"kubernetes.io/projected/812e3245-0438-439f-ad41-44bbf31fa7d5-kube-api-access-ngg7p\") pod \"812e3245-0438-439f-ad41-44bbf31fa7d5\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.949933 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-catalog-content\") pod \"812e3245-0438-439f-ad41-44bbf31fa7d5\" (UID: \"812e3245-0438-439f-ad41-44bbf31fa7d5\") " Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.950297 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-utilities" (OuterVolumeSpecName: "utilities") pod "812e3245-0438-439f-ad41-44bbf31fa7d5" (UID: "812e3245-0438-439f-ad41-44bbf31fa7d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:32:24 crc kubenswrapper[4880]: I1201 03:32:24.955050 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812e3245-0438-439f-ad41-44bbf31fa7d5-kube-api-access-ngg7p" (OuterVolumeSpecName: "kube-api-access-ngg7p") pod "812e3245-0438-439f-ad41-44bbf31fa7d5" (UID: "812e3245-0438-439f-ad41-44bbf31fa7d5"). InnerVolumeSpecName "kube-api-access-ngg7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.051477 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.051532 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngg7p\" (UniqueName: \"kubernetes.io/projected/812e3245-0438-439f-ad41-44bbf31fa7d5-kube-api-access-ngg7p\") on node \"crc\" DevicePath \"\"" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.085480 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "812e3245-0438-439f-ad41-44bbf31fa7d5" (UID: "812e3245-0438-439f-ad41-44bbf31fa7d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.152948 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812e3245-0438-439f-ad41-44bbf31fa7d5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.340694 4880 generic.go:334] "Generic (PLEG): container finished" podID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerID="09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741" exitCode=0 Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.340761 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rzww" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.340770 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerDied","Data":"09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741"} Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.340851 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rzww" event={"ID":"812e3245-0438-439f-ad41-44bbf31fa7d5","Type":"ContainerDied","Data":"8850332358ff2a127563d226c323cde063e9d3dfc969e7a7371da930be8c3dc7"} Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.340970 4880 scope.go:117] "RemoveContainer" containerID="09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.411071 4880 scope.go:117] "RemoveContainer" containerID="a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.417819 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rzww"] Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.431775 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2rzww"] Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.438441 4880 scope.go:117] "RemoveContainer" containerID="69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.495861 4880 scope.go:117] "RemoveContainer" containerID="09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741" Dec 01 03:32:25 crc kubenswrapper[4880]: E1201 03:32:25.496503 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741\": container with ID starting with 09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741 not found: ID does not exist" containerID="09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.496572 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741"} err="failed to get container status \"09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741\": rpc error: code = NotFound desc = could not find container \"09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741\": container with ID starting with 09d54625765a98d1597a29e0df8489c59845c193b4517d87c6accf1520873741 not found: ID does not exist" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.496605 4880 scope.go:117] "RemoveContainer" containerID="a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd" Dec 01 03:32:25 crc kubenswrapper[4880]: E1201 03:32:25.497543 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd\": container with ID starting with a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd not found: ID does not exist" containerID="a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.497595 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd"} err="failed to get container status \"a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd\": rpc error: code = NotFound desc = could not find container \"a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd\": container with ID 
starting with a5c8aa41c47494bb0585998d0cc34f13935df4a5ef9d6bb6e7a9493dff9472bd not found: ID does not exist" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.497631 4880 scope.go:117] "RemoveContainer" containerID="69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf" Dec 01 03:32:25 crc kubenswrapper[4880]: E1201 03:32:25.498097 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf\": container with ID starting with 69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf not found: ID does not exist" containerID="69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf" Dec 01 03:32:25 crc kubenswrapper[4880]: I1201 03:32:25.498127 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf"} err="failed to get container status \"69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf\": rpc error: code = NotFound desc = could not find container \"69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf\": container with ID starting with 69fff349b80e222cfafe50c1da52732f620d67893bc9fe00b77a1d60452809bf not found: ID does not exist" Dec 01 03:32:26 crc kubenswrapper[4880]: I1201 03:32:26.805266 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" path="/var/lib/kubelet/pods/812e3245-0438-439f-ad41-44bbf31fa7d5/volumes" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.092276 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l9hwg"] Dec 01 03:32:39 crc kubenswrapper[4880]: E1201 03:32:39.096753 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="extract-content" Dec 01 03:32:39 crc 
kubenswrapper[4880]: I1201 03:32:39.096805 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="extract-content" Dec 01 03:32:39 crc kubenswrapper[4880]: E1201 03:32:39.096847 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="registry-server" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.096856 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="registry-server" Dec 01 03:32:39 crc kubenswrapper[4880]: E1201 03:32:39.096907 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="extract-utilities" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.096920 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="extract-utilities" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.097308 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="812e3245-0438-439f-ad41-44bbf31fa7d5" containerName="registry-server" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.099113 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.104942 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9hwg"] Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.240170 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-utilities\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.240218 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-catalog-content\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.240239 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4gc\" (UniqueName: \"kubernetes.io/projected/5df0263b-878f-4992-abfc-599740c0d413-kube-api-access-4b4gc\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.341818 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-utilities\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.342125 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-catalog-content\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.342597 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4gc\" (UniqueName: \"kubernetes.io/projected/5df0263b-878f-4992-abfc-599740c0d413-kube-api-access-4b4gc\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.342559 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-catalog-content\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.342463 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-utilities\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.362771 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4gc\" (UniqueName: \"kubernetes.io/projected/5df0263b-878f-4992-abfc-599740c0d413-kube-api-access-4b4gc\") pod \"redhat-marketplace-l9hwg\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.417765 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:39 crc kubenswrapper[4880]: I1201 03:32:39.920790 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9hwg"] Dec 01 03:32:40 crc kubenswrapper[4880]: I1201 03:32:40.531054 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerStarted","Data":"480ab172bc3bf7bea155e7dcaeb61a59f9a224d3ce5a1d3e5ed9a2b7dda26b28"} Dec 01 03:32:41 crc kubenswrapper[4880]: I1201 03:32:41.546956 4880 generic.go:334] "Generic (PLEG): container finished" podID="5df0263b-878f-4992-abfc-599740c0d413" containerID="83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8" exitCode=0 Dec 01 03:32:41 crc kubenswrapper[4880]: I1201 03:32:41.547115 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerDied","Data":"83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8"} Dec 01 03:32:42 crc kubenswrapper[4880]: I1201 03:32:42.556738 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerStarted","Data":"1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4"} Dec 01 03:32:43 crc kubenswrapper[4880]: I1201 03:32:43.568976 4880 generic.go:334] "Generic (PLEG): container finished" podID="5df0263b-878f-4992-abfc-599740c0d413" containerID="1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4" exitCode=0 Dec 01 03:32:43 crc kubenswrapper[4880]: I1201 03:32:43.569023 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" 
event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerDied","Data":"1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4"} Dec 01 03:32:44 crc kubenswrapper[4880]: I1201 03:32:44.577509 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerStarted","Data":"52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2"} Dec 01 03:32:44 crc kubenswrapper[4880]: I1201 03:32:44.599530 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l9hwg" podStartSLOduration=3.135243861 podStartE2EDuration="5.599511819s" podCreationTimestamp="2025-12-01 03:32:39 +0000 UTC" firstStartedPulling="2025-12-01 03:32:41.549664464 +0000 UTC m=+2191.060918866" lastFinishedPulling="2025-12-01 03:32:44.013932412 +0000 UTC m=+2193.525186824" observedRunningTime="2025-12-01 03:32:44.592704002 +0000 UTC m=+2194.103958384" watchObservedRunningTime="2025-12-01 03:32:44.599511819 +0000 UTC m=+2194.110766191" Dec 01 03:32:47 crc kubenswrapper[4880]: I1201 03:32:47.369219 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:32:47 crc kubenswrapper[4880]: I1201 03:32:47.369956 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:32:49 crc kubenswrapper[4880]: I1201 03:32:49.419058 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:49 crc kubenswrapper[4880]: I1201 03:32:49.421401 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:49 crc kubenswrapper[4880]: I1201 03:32:49.484076 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:49 crc kubenswrapper[4880]: I1201 03:32:49.677679 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:49 crc kubenswrapper[4880]: I1201 03:32:49.746422 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9hwg"] Dec 01 03:32:51 crc kubenswrapper[4880]: I1201 03:32:51.650301 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l9hwg" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="registry-server" containerID="cri-o://52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2" gracePeriod=2 Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.131562 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.265647 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-catalog-content\") pod \"5df0263b-878f-4992-abfc-599740c0d413\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.265781 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4gc\" (UniqueName: \"kubernetes.io/projected/5df0263b-878f-4992-abfc-599740c0d413-kube-api-access-4b4gc\") pod \"5df0263b-878f-4992-abfc-599740c0d413\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.265804 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-utilities\") pod \"5df0263b-878f-4992-abfc-599740c0d413\" (UID: \"5df0263b-878f-4992-abfc-599740c0d413\") " Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.267073 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-utilities" (OuterVolumeSpecName: "utilities") pod "5df0263b-878f-4992-abfc-599740c0d413" (UID: "5df0263b-878f-4992-abfc-599740c0d413"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.275150 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df0263b-878f-4992-abfc-599740c0d413-kube-api-access-4b4gc" (OuterVolumeSpecName: "kube-api-access-4b4gc") pod "5df0263b-878f-4992-abfc-599740c0d413" (UID: "5df0263b-878f-4992-abfc-599740c0d413"). InnerVolumeSpecName "kube-api-access-4b4gc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.289781 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5df0263b-878f-4992-abfc-599740c0d413" (UID: "5df0263b-878f-4992-abfc-599740c0d413"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.367408 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.367436 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4gc\" (UniqueName: \"kubernetes.io/projected/5df0263b-878f-4992-abfc-599740c0d413-kube-api-access-4b4gc\") on node \"crc\" DevicePath \"\"" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.367445 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5df0263b-878f-4992-abfc-599740c0d413-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.662320 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9hwg" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.662311 4880 generic.go:334] "Generic (PLEG): container finished" podID="5df0263b-878f-4992-abfc-599740c0d413" containerID="52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2" exitCode=0 Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.665052 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerDied","Data":"52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2"} Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.665356 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9hwg" event={"ID":"5df0263b-878f-4992-abfc-599740c0d413","Type":"ContainerDied","Data":"480ab172bc3bf7bea155e7dcaeb61a59f9a224d3ce5a1d3e5ed9a2b7dda26b28"} Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.665455 4880 scope.go:117] "RemoveContainer" containerID="52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.704652 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9hwg"] Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.706550 4880 scope.go:117] "RemoveContainer" containerID="1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.718414 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9hwg"] Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.731612 4880 scope.go:117] "RemoveContainer" containerID="83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.774769 4880 scope.go:117] "RemoveContainer" 
containerID="52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2" Dec 01 03:32:52 crc kubenswrapper[4880]: E1201 03:32:52.775318 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2\": container with ID starting with 52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2 not found: ID does not exist" containerID="52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.775366 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2"} err="failed to get container status \"52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2\": rpc error: code = NotFound desc = could not find container \"52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2\": container with ID starting with 52025bdcc2ad1ad1ca147029b31bab1936b57a582b7702fbcaea216d54adfaa2 not found: ID does not exist" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.775403 4880 scope.go:117] "RemoveContainer" containerID="1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4" Dec 01 03:32:52 crc kubenswrapper[4880]: E1201 03:32:52.775810 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4\": container with ID starting with 1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4 not found: ID does not exist" containerID="1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.775838 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4"} err="failed to get container status \"1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4\": rpc error: code = NotFound desc = could not find container \"1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4\": container with ID starting with 1b67bd7346e66220bcc2297016d3deab909720fb3b643fcb88205f86bfa198a4 not found: ID does not exist" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.775856 4880 scope.go:117] "RemoveContainer" containerID="83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8" Dec 01 03:32:52 crc kubenswrapper[4880]: E1201 03:32:52.776169 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8\": container with ID starting with 83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8 not found: ID does not exist" containerID="83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.776198 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8"} err="failed to get container status \"83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8\": rpc error: code = NotFound desc = could not find container \"83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8\": container with ID starting with 83a645e30b3a97461acb39d76a9ca5c144d7378fb0ac01c54c0148e3ed31c0f8 not found: ID does not exist" Dec 01 03:32:52 crc kubenswrapper[4880]: I1201 03:32:52.799182 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df0263b-878f-4992-abfc-599740c0d413" path="/var/lib/kubelet/pods/5df0263b-878f-4992-abfc-599740c0d413/volumes" Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 
03:33:17.369031 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.369907 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.369996 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.371285 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.371383 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" gracePeriod=600 Dec 01 03:33:17 crc kubenswrapper[4880]: E1201 03:33:17.505533 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.949246 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" exitCode=0 Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.949309 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b"} Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.949356 4880 scope.go:117] "RemoveContainer" containerID="de0a73b1010be3c3f56109b45fc94aace1b7ec1e62e0fed3d697920f77540b99" Dec 01 03:33:17 crc kubenswrapper[4880]: I1201 03:33:17.950439 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:33:17 crc kubenswrapper[4880]: E1201 03:33:17.951200 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:33:31 crc kubenswrapper[4880]: I1201 03:33:31.784466 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:33:31 crc kubenswrapper[4880]: E1201 03:33:31.786164 4880 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:33:45 crc kubenswrapper[4880]: I1201 03:33:45.785080 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:33:45 crc kubenswrapper[4880]: E1201 03:33:45.786486 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:33:58 crc kubenswrapper[4880]: I1201 03:33:58.785926 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:33:58 crc kubenswrapper[4880]: E1201 03:33:58.787037 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:34:13 crc kubenswrapper[4880]: I1201 03:34:13.785397 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:34:13 crc kubenswrapper[4880]: E1201 03:34:13.786594 4880 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:34:28 crc kubenswrapper[4880]: I1201 03:34:28.785618 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:34:28 crc kubenswrapper[4880]: E1201 03:34:28.786940 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:34:42 crc kubenswrapper[4880]: I1201 03:34:42.786312 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:34:42 crc kubenswrapper[4880]: E1201 03:34:42.786885 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:34:57 crc kubenswrapper[4880]: I1201 03:34:57.784002 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:34:57 crc kubenswrapper[4880]: E1201 03:34:57.784805 4880 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:35:08 crc kubenswrapper[4880]: I1201 03:35:08.787917 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:35:08 crc kubenswrapper[4880]: E1201 03:35:08.789201 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:35:19 crc kubenswrapper[4880]: I1201 03:35:19.783824 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:35:19 crc kubenswrapper[4880]: E1201 03:35:19.784927 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:35:30 crc kubenswrapper[4880]: I1201 03:35:30.809700 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:35:30 crc kubenswrapper[4880]: E1201 
03:35:30.810825 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:35:45 crc kubenswrapper[4880]: I1201 03:35:45.785334 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:35:45 crc kubenswrapper[4880]: E1201 03:35:45.786108 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:35:59 crc kubenswrapper[4880]: I1201 03:35:59.784576 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:35:59 crc kubenswrapper[4880]: E1201 03:35:59.786449 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:36:13 crc kubenswrapper[4880]: I1201 03:36:13.784412 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:36:13 crc 
kubenswrapper[4880]: E1201 03:36:13.786397 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:36:24 crc kubenswrapper[4880]: I1201 03:36:24.784777 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:36:24 crc kubenswrapper[4880]: E1201 03:36:24.785542 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:36:29 crc kubenswrapper[4880]: I1201 03:36:29.176095 4880 generic.go:334] "Generic (PLEG): container finished" podID="8c13471c-49c7-422e-9062-a738c339c136" containerID="8c4a4a72fd660fdda4d3ffa855cf07b25da4e9963094cc7aea07ec55d6443cf8" exitCode=0 Dec 01 03:36:29 crc kubenswrapper[4880]: I1201 03:36:29.176469 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" event={"ID":"8c13471c-49c7-422e-9062-a738c339c136","Type":"ContainerDied","Data":"8c4a4a72fd660fdda4d3ffa855cf07b25da4e9963094cc7aea07ec55d6443cf8"} Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.574106 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.676334 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-inventory\") pod \"8c13471c-49c7-422e-9062-a738c339c136\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.676400 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-combined-ca-bundle\") pod \"8c13471c-49c7-422e-9062-a738c339c136\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.676515 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-secret-0\") pod \"8c13471c-49c7-422e-9062-a738c339c136\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.676705 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-ssh-key\") pod \"8c13471c-49c7-422e-9062-a738c339c136\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.676754 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk852\" (UniqueName: \"kubernetes.io/projected/8c13471c-49c7-422e-9062-a738c339c136-kube-api-access-fk852\") pod \"8c13471c-49c7-422e-9062-a738c339c136\" (UID: \"8c13471c-49c7-422e-9062-a738c339c136\") " Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.682966 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/8c13471c-49c7-422e-9062-a738c339c136-kube-api-access-fk852" (OuterVolumeSpecName: "kube-api-access-fk852") pod "8c13471c-49c7-422e-9062-a738c339c136" (UID: "8c13471c-49c7-422e-9062-a738c339c136"). InnerVolumeSpecName "kube-api-access-fk852". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.688551 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8c13471c-49c7-422e-9062-a738c339c136" (UID: "8c13471c-49c7-422e-9062-a738c339c136"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.713840 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-inventory" (OuterVolumeSpecName: "inventory") pod "8c13471c-49c7-422e-9062-a738c339c136" (UID: "8c13471c-49c7-422e-9062-a738c339c136"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.717325 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c13471c-49c7-422e-9062-a738c339c136" (UID: "8c13471c-49c7-422e-9062-a738c339c136"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.729325 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8c13471c-49c7-422e-9062-a738c339c136" (UID: "8c13471c-49c7-422e-9062-a738c339c136"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.779837 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.780179 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk852\" (UniqueName: \"kubernetes.io/projected/8c13471c-49c7-422e-9062-a738c339c136-kube-api-access-fk852\") on node \"crc\" DevicePath \"\"" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.780297 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.780383 4880 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:36:30 crc kubenswrapper[4880]: I1201 03:36:30.780470 4880 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8c13471c-49c7-422e-9062-a738c339c136-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.195504 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" event={"ID":"8c13471c-49c7-422e-9062-a738c339c136","Type":"ContainerDied","Data":"c0154f5be64f92176fb8b9b7e217a93589a18b4d9963b91a1fc3feb71c906266"} Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.195550 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0154f5be64f92176fb8b9b7e217a93589a18b4d9963b91a1fc3feb71c906266" Dec 01 
03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.195636 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xcs86" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.342348 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx"] Dec 01 03:36:31 crc kubenswrapper[4880]: E1201 03:36:31.342710 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="registry-server" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.342721 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="registry-server" Dec 01 03:36:31 crc kubenswrapper[4880]: E1201 03:36:31.342743 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c13471c-49c7-422e-9062-a738c339c136" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.342750 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c13471c-49c7-422e-9062-a738c339c136" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 03:36:31 crc kubenswrapper[4880]: E1201 03:36:31.342769 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="extract-utilities" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.342778 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="extract-utilities" Dec 01 03:36:31 crc kubenswrapper[4880]: E1201 03:36:31.342793 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="extract-content" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.342798 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df0263b-878f-4992-abfc-599740c0d413" 
containerName="extract-content" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.346464 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c13471c-49c7-422e-9062-a738c339c136" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.346552 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df0263b-878f-4992-abfc-599740c0d413" containerName="registry-server" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.347845 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.351274 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.353493 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.353647 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.353700 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.353996 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.354396 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.355381 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.361662 4880 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx"] Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394270 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394344 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394439 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394478 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394535 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394646 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394700 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.394974 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7f4\" (UniqueName: \"kubernetes.io/projected/4efc9650-fbc9-4ea8-843f-1e2e21f34296-kube-api-access-pm7f4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.395019 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.496864 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.497264 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.497331 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.497373 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 
03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.497412 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.497739 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7f4\" (UniqueName: \"kubernetes.io/projected/4efc9650-fbc9-4ea8-843f-1e2e21f34296-kube-api-access-pm7f4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.498130 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.498188 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.498333 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.499519 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.503290 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.503358 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.503683 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: 
I1201 03:36:31.510216 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.515708 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.515736 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.518528 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.519673 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7f4\" (UniqueName: \"kubernetes.io/projected/4efc9650-fbc9-4ea8-843f-1e2e21f34296-kube-api-access-pm7f4\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btfzx\" (UID: 
\"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:31 crc kubenswrapper[4880]: I1201 03:36:31.673921 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:36:32 crc kubenswrapper[4880]: I1201 03:36:32.281237 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx"] Dec 01 03:36:32 crc kubenswrapper[4880]: I1201 03:36:32.290993 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:36:33 crc kubenswrapper[4880]: I1201 03:36:33.216815 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" event={"ID":"4efc9650-fbc9-4ea8-843f-1e2e21f34296","Type":"ContainerStarted","Data":"0812e5fe6424eb464d32ada17986738f88f2574c58b8b0f3b13ab794aafcb707"} Dec 01 03:36:33 crc kubenswrapper[4880]: I1201 03:36:33.217190 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" event={"ID":"4efc9650-fbc9-4ea8-843f-1e2e21f34296","Type":"ContainerStarted","Data":"d84cafcfec89897126988a2a1e6039e5fdc413145d4b91397deccc33ace992a1"} Dec 01 03:36:33 crc kubenswrapper[4880]: I1201 03:36:33.239433 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" podStartSLOduration=1.704370519 podStartE2EDuration="2.239417327s" podCreationTimestamp="2025-12-01 03:36:31 +0000 UTC" firstStartedPulling="2025-12-01 03:36:32.290700233 +0000 UTC m=+2421.801954615" lastFinishedPulling="2025-12-01 03:36:32.825747041 +0000 UTC m=+2422.337001423" observedRunningTime="2025-12-01 03:36:33.236003613 +0000 UTC m=+2422.747257985" watchObservedRunningTime="2025-12-01 03:36:33.239417327 +0000 UTC m=+2422.750671699" Dec 01 03:36:36 crc 
kubenswrapper[4880]: I1201 03:36:36.784982 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:36:36 crc kubenswrapper[4880]: E1201 03:36:36.785842 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:36:49 crc kubenswrapper[4880]: I1201 03:36:49.784100 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:36:49 crc kubenswrapper[4880]: E1201 03:36:49.784865 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:37:00 crc kubenswrapper[4880]: I1201 03:37:00.804801 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:37:00 crc kubenswrapper[4880]: E1201 03:37:00.806435 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 
01 03:37:12 crc kubenswrapper[4880]: I1201 03:37:12.788160 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:37:12 crc kubenswrapper[4880]: E1201 03:37:12.789093 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:37:26 crc kubenswrapper[4880]: I1201 03:37:26.785308 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:37:26 crc kubenswrapper[4880]: E1201 03:37:26.786677 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:37:38 crc kubenswrapper[4880]: I1201 03:37:38.783953 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:37:38 crc kubenswrapper[4880]: E1201 03:37:38.784922 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:37:49 crc kubenswrapper[4880]: I1201 03:37:49.785771 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:37:49 crc kubenswrapper[4880]: E1201 03:37:49.786526 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:38:00 crc kubenswrapper[4880]: I1201 03:38:00.803398 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:38:00 crc kubenswrapper[4880]: E1201 03:38:00.805651 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:38:11 crc kubenswrapper[4880]: I1201 03:38:11.784328 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:38:11 crc kubenswrapper[4880]: E1201 03:38:11.785163 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:38:22 crc kubenswrapper[4880]: I1201 03:38:22.785616 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:38:23 crc kubenswrapper[4880]: I1201 03:38:23.263022 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"84d4567f8e44b9d8c3d5a522a1d30e6ab551d582fd74bdcfce5fce4ca3dcf52a"} Dec 01 03:39:51 crc kubenswrapper[4880]: I1201 03:39:51.197657 4880 generic.go:334] "Generic (PLEG): container finished" podID="4efc9650-fbc9-4ea8-843f-1e2e21f34296" containerID="0812e5fe6424eb464d32ada17986738f88f2574c58b8b0f3b13ab794aafcb707" exitCode=0 Dec 01 03:39:51 crc kubenswrapper[4880]: I1201 03:39:51.197888 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" event={"ID":"4efc9650-fbc9-4ea8-843f-1e2e21f34296","Type":"ContainerDied","Data":"0812e5fe6424eb464d32ada17986738f88f2574c58b8b0f3b13ab794aafcb707"} Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.666447 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.751834 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-ssh-key\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.751967 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-0\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.751989 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-combined-ca-bundle\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.752079 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-extra-config-0\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.752102 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-0\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.752144 4880 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-1\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.752159 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-inventory\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.752191 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-1\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.752299 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm7f4\" (UniqueName: \"kubernetes.io/projected/4efc9650-fbc9-4ea8-843f-1e2e21f34296-kube-api-access-pm7f4\") pod \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\" (UID: \"4efc9650-fbc9-4ea8-843f-1e2e21f34296\") " Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.774274 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efc9650-fbc9-4ea8-843f-1e2e21f34296-kube-api-access-pm7f4" (OuterVolumeSpecName: "kube-api-access-pm7f4") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "kube-api-access-pm7f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.778579 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.779505 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.781113 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.785649 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-inventory" (OuterVolumeSpecName: "inventory") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.792359 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.793246 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.808035 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.816335 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4efc9650-fbc9-4ea8-843f-1e2e21f34296" (UID: "4efc9650-fbc9-4ea8-843f-1e2e21f34296"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854729 4880 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854772 4880 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854788 4880 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854821 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854836 4880 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854850 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm7f4\" (UniqueName: \"kubernetes.io/projected/4efc9650-fbc9-4ea8-843f-1e2e21f34296-kube-api-access-pm7f4\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854861 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc 
kubenswrapper[4880]: I1201 03:39:52.854889 4880 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:52 crc kubenswrapper[4880]: I1201 03:39:52.854900 4880 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efc9650-fbc9-4ea8-843f-1e2e21f34296-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.225651 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" event={"ID":"4efc9650-fbc9-4ea8-843f-1e2e21f34296","Type":"ContainerDied","Data":"d84cafcfec89897126988a2a1e6039e5fdc413145d4b91397deccc33ace992a1"} Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.225718 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84cafcfec89897126988a2a1e6039e5fdc413145d4b91397deccc33ace992a1" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.225815 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btfzx" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.391611 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4"] Dec 01 03:39:53 crc kubenswrapper[4880]: E1201 03:39:53.392350 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efc9650-fbc9-4ea8-843f-1e2e21f34296" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.392363 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efc9650-fbc9-4ea8-843f-1e2e21f34296" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.392563 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efc9650-fbc9-4ea8-843f-1e2e21f34296" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.393243 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.397282 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.397463 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.397670 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-zdmnl" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.397784 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.398406 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.401564 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4"] Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467364 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467452 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: 
\"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467555 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467626 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7495\" (UniqueName: \"kubernetes.io/projected/2e68604b-db33-4c1e-acac-bda832f95b3d-kube-api-access-x7495\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467651 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467907 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 
01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.467947 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570436 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570487 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570554 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570576 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570597 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570621 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7495\" (UniqueName: \"kubernetes.io/projected/2e68604b-db33-4c1e-acac-bda832f95b3d-kube-api-access-x7495\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.570642 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.576449 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: 
\"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.576454 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.576665 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.576777 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.582091 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.584558 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.605896 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7495\" (UniqueName: \"kubernetes.io/projected/2e68604b-db33-4c1e-acac-bda832f95b3d-kube-api-access-x7495\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:53 crc kubenswrapper[4880]: I1201 03:39:53.721827 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:39:54 crc kubenswrapper[4880]: I1201 03:39:54.358261 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4"] Dec 01 03:39:55 crc kubenswrapper[4880]: I1201 03:39:55.245180 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" event={"ID":"2e68604b-db33-4c1e-acac-bda832f95b3d","Type":"ContainerStarted","Data":"fe1464495f6cfb93efbc86d9020f16b7e3b2a8d83a572090906146932b022d98"} Dec 01 03:39:55 crc kubenswrapper[4880]: I1201 03:39:55.245694 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" event={"ID":"2e68604b-db33-4c1e-acac-bda832f95b3d","Type":"ContainerStarted","Data":"b5c548b1d35df67e1a85d354d01f7d11a80d79eccceb27be89075febfbe53345"} Dec 01 03:39:55 crc kubenswrapper[4880]: I1201 03:39:55.276594 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" podStartSLOduration=1.685658282 podStartE2EDuration="2.276571901s" podCreationTimestamp="2025-12-01 03:39:53 +0000 UTC" firstStartedPulling="2025-12-01 03:39:54.374074819 +0000 UTC m=+2623.885329201" lastFinishedPulling="2025-12-01 03:39:54.964988448 +0000 UTC m=+2624.476242820" observedRunningTime="2025-12-01 03:39:55.269623369 +0000 UTC m=+2624.780877741" watchObservedRunningTime="2025-12-01 03:39:55.276571901 +0000 UTC m=+2624.787826273" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.215909 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gx522"] Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.218649 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.233910 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx522"] Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.356952 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-catalog-content\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.357039 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mk9c\" (UniqueName: \"kubernetes.io/projected/590f7cc3-04d2-465d-8686-1f38f4519c1e-kube-api-access-4mk9c\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.357094 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-utilities\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.458886 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-utilities\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.458997 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-catalog-content\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.459055 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mk9c\" (UniqueName: \"kubernetes.io/projected/590f7cc3-04d2-465d-8686-1f38f4519c1e-kube-api-access-4mk9c\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.459665 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-utilities\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.459775 4880 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-catalog-content\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.480941 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mk9c\" (UniqueName: \"kubernetes.io/projected/590f7cc3-04d2-465d-8686-1f38f4519c1e-kube-api-access-4mk9c\") pod \"community-operators-gx522\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:57 crc kubenswrapper[4880]: I1201 03:39:57.577896 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:39:58 crc kubenswrapper[4880]: I1201 03:39:58.129705 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gx522"] Dec 01 03:39:58 crc kubenswrapper[4880]: W1201 03:39:58.138471 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod590f7cc3_04d2_465d_8686_1f38f4519c1e.slice/crio-627cd63ddc62fcf0499da87c74f29a6ea9c91e8eb0590bcc27830b9dcdf3b9b0 WatchSource:0}: Error finding container 627cd63ddc62fcf0499da87c74f29a6ea9c91e8eb0590bcc27830b9dcdf3b9b0: Status 404 returned error can't find the container with id 627cd63ddc62fcf0499da87c74f29a6ea9c91e8eb0590bcc27830b9dcdf3b9b0 Dec 01 03:39:58 crc kubenswrapper[4880]: I1201 03:39:58.301551 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerStarted","Data":"627cd63ddc62fcf0499da87c74f29a6ea9c91e8eb0590bcc27830b9dcdf3b9b0"} Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.313393 4880 generic.go:334] "Generic 
(PLEG): container finished" podID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerID="17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467" exitCode=0 Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.313489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerDied","Data":"17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467"} Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.622482 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9vd7l"] Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.625247 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.639266 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vd7l"] Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.701160 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-catalog-content\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.701195 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-utilities\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.701330 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xr2wq\" (UniqueName: \"kubernetes.io/projected/33c9573d-cc8b-40a4-89f2-d1feda4b7162-kube-api-access-xr2wq\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.802665 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-catalog-content\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.802707 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-utilities\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.802832 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2wq\" (UniqueName: \"kubernetes.io/projected/33c9573d-cc8b-40a4-89f2-d1feda4b7162-kube-api-access-xr2wq\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.803551 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-catalog-content\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.803762 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-utilities\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.823193 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2wq\" (UniqueName: \"kubernetes.io/projected/33c9573d-cc8b-40a4-89f2-d1feda4b7162-kube-api-access-xr2wq\") pod \"certified-operators-9vd7l\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:39:59 crc kubenswrapper[4880]: I1201 03:39:59.969767 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:00 crc kubenswrapper[4880]: I1201 03:40:00.495258 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vd7l"] Dec 01 03:40:01 crc kubenswrapper[4880]: I1201 03:40:01.338878 4880 generic.go:334] "Generic (PLEG): container finished" podID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerID="264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e" exitCode=0 Dec 01 03:40:01 crc kubenswrapper[4880]: I1201 03:40:01.339045 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerDied","Data":"264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e"} Dec 01 03:40:01 crc kubenswrapper[4880]: I1201 03:40:01.339202 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerStarted","Data":"699cf286ca880d07927d788bcc8f134d85e6336a5cbba4fab483325c35411998"} Dec 01 03:40:01 crc kubenswrapper[4880]: I1201 03:40:01.341456 4880 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerStarted","Data":"4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2"} Dec 01 03:40:02 crc kubenswrapper[4880]: I1201 03:40:02.358280 4880 generic.go:334] "Generic (PLEG): container finished" podID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerID="4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2" exitCode=0 Dec 01 03:40:02 crc kubenswrapper[4880]: I1201 03:40:02.358390 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerDied","Data":"4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2"} Dec 01 03:40:03 crc kubenswrapper[4880]: I1201 03:40:03.369706 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerStarted","Data":"097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5"} Dec 01 03:40:03 crc kubenswrapper[4880]: I1201 03:40:03.372451 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerStarted","Data":"603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d"} Dec 01 03:40:03 crc kubenswrapper[4880]: I1201 03:40:03.426127 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gx522" podStartSLOduration=2.944671218 podStartE2EDuration="6.426110168s" podCreationTimestamp="2025-12-01 03:39:57 +0000 UTC" firstStartedPulling="2025-12-01 03:39:59.31657973 +0000 UTC m=+2628.827834102" lastFinishedPulling="2025-12-01 03:40:02.79801869 +0000 UTC m=+2632.309273052" observedRunningTime="2025-12-01 03:40:03.418706445 +0000 UTC 
m=+2632.929960827" watchObservedRunningTime="2025-12-01 03:40:03.426110168 +0000 UTC m=+2632.937364530" Dec 01 03:40:04 crc kubenswrapper[4880]: I1201 03:40:04.382548 4880 generic.go:334] "Generic (PLEG): container finished" podID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerID="097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5" exitCode=0 Dec 01 03:40:04 crc kubenswrapper[4880]: I1201 03:40:04.382621 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerDied","Data":"097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5"} Dec 01 03:40:05 crc kubenswrapper[4880]: I1201 03:40:05.392923 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerStarted","Data":"13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be"} Dec 01 03:40:05 crc kubenswrapper[4880]: I1201 03:40:05.414641 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9vd7l" podStartSLOduration=2.740072575 podStartE2EDuration="6.414625119s" podCreationTimestamp="2025-12-01 03:39:59 +0000 UTC" firstStartedPulling="2025-12-01 03:40:01.340688271 +0000 UTC m=+2630.851942653" lastFinishedPulling="2025-12-01 03:40:05.015240825 +0000 UTC m=+2634.526495197" observedRunningTime="2025-12-01 03:40:05.411098722 +0000 UTC m=+2634.922353104" watchObservedRunningTime="2025-12-01 03:40:05.414625119 +0000 UTC m=+2634.925879491" Dec 01 03:40:07 crc kubenswrapper[4880]: I1201 03:40:07.578396 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:40:07 crc kubenswrapper[4880]: I1201 03:40:07.578898 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-gx522" Dec 01 03:40:07 crc kubenswrapper[4880]: I1201 03:40:07.632784 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:40:08 crc kubenswrapper[4880]: I1201 03:40:08.470669 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:40:08 crc kubenswrapper[4880]: I1201 03:40:08.805550 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx522"] Dec 01 03:40:09 crc kubenswrapper[4880]: I1201 03:40:09.970448 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:09 crc kubenswrapper[4880]: I1201 03:40:09.970710 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:10 crc kubenswrapper[4880]: I1201 03:40:10.020683 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:10 crc kubenswrapper[4880]: I1201 03:40:10.442179 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gx522" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="registry-server" containerID="cri-o://603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d" gracePeriod=2 Dec 01 03:40:10 crc kubenswrapper[4880]: I1201 03:40:10.496849 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:10 crc kubenswrapper[4880]: I1201 03:40:10.925013 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.120921 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mk9c\" (UniqueName: \"kubernetes.io/projected/590f7cc3-04d2-465d-8686-1f38f4519c1e-kube-api-access-4mk9c\") pod \"590f7cc3-04d2-465d-8686-1f38f4519c1e\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.121103 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-utilities\") pod \"590f7cc3-04d2-465d-8686-1f38f4519c1e\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.121186 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-catalog-content\") pod \"590f7cc3-04d2-465d-8686-1f38f4519c1e\" (UID: \"590f7cc3-04d2-465d-8686-1f38f4519c1e\") " Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.122469 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-utilities" (OuterVolumeSpecName: "utilities") pod "590f7cc3-04d2-465d-8686-1f38f4519c1e" (UID: "590f7cc3-04d2-465d-8686-1f38f4519c1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.134796 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590f7cc3-04d2-465d-8686-1f38f4519c1e-kube-api-access-4mk9c" (OuterVolumeSpecName: "kube-api-access-4mk9c") pod "590f7cc3-04d2-465d-8686-1f38f4519c1e" (UID: "590f7cc3-04d2-465d-8686-1f38f4519c1e"). InnerVolumeSpecName "kube-api-access-4mk9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.196371 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "590f7cc3-04d2-465d-8686-1f38f4519c1e" (UID: "590f7cc3-04d2-465d-8686-1f38f4519c1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.224181 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mk9c\" (UniqueName: \"kubernetes.io/projected/590f7cc3-04d2-465d-8686-1f38f4519c1e-kube-api-access-4mk9c\") on node \"crc\" DevicePath \"\"" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.224227 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.224241 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/590f7cc3-04d2-465d-8686-1f38f4519c1e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.452370 4880 generic.go:334] "Generic (PLEG): container finished" podID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerID="603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d" exitCode=0 Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.452420 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gx522" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.452441 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerDied","Data":"603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d"} Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.452652 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gx522" event={"ID":"590f7cc3-04d2-465d-8686-1f38f4519c1e","Type":"ContainerDied","Data":"627cd63ddc62fcf0499da87c74f29a6ea9c91e8eb0590bcc27830b9dcdf3b9b0"} Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.452668 4880 scope.go:117] "RemoveContainer" containerID="603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.472034 4880 scope.go:117] "RemoveContainer" containerID="4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.498796 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gx522"] Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.511771 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gx522"] Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.517857 4880 scope.go:117] "RemoveContainer" containerID="17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.545949 4880 scope.go:117] "RemoveContainer" containerID="603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d" Dec 01 03:40:11 crc kubenswrapper[4880]: E1201 03:40:11.546930 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d\": container with ID starting with 603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d not found: ID does not exist" containerID="603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.547007 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d"} err="failed to get container status \"603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d\": rpc error: code = NotFound desc = could not find container \"603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d\": container with ID starting with 603d710b2f2d39f419a2c80d1ea3bef8f3ba709c7a870d24e859a4427e889c9d not found: ID does not exist" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.547044 4880 scope.go:117] "RemoveContainer" containerID="4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2" Dec 01 03:40:11 crc kubenswrapper[4880]: E1201 03:40:11.547454 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2\": container with ID starting with 4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2 not found: ID does not exist" containerID="4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.547492 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2"} err="failed to get container status \"4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2\": rpc error: code = NotFound desc = could not find container \"4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2\": container with ID 
starting with 4c30d30be2bf405d92a5676dbeefbc6d88274981a0273cb0c381845c695869c2 not found: ID does not exist" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.547535 4880 scope.go:117] "RemoveContainer" containerID="17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467" Dec 01 03:40:11 crc kubenswrapper[4880]: E1201 03:40:11.548007 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467\": container with ID starting with 17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467 not found: ID does not exist" containerID="17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467" Dec 01 03:40:11 crc kubenswrapper[4880]: I1201 03:40:11.548046 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467"} err="failed to get container status \"17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467\": rpc error: code = NotFound desc = could not find container \"17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467\": container with ID starting with 17c9de81c2d7684224f04cbe05a15c5cb3eba06337a5720363506e9fa6b1e467 not found: ID does not exist" Dec 01 03:40:12 crc kubenswrapper[4880]: I1201 03:40:12.392316 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vd7l"] Dec 01 03:40:12 crc kubenswrapper[4880]: I1201 03:40:12.468066 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9vd7l" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="registry-server" containerID="cri-o://13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be" gracePeriod=2 Dec 01 03:40:12 crc kubenswrapper[4880]: I1201 03:40:12.803290 4880 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" path="/var/lib/kubelet/pods/590f7cc3-04d2-465d-8686-1f38f4519c1e/volumes" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.009533 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.181150 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-utilities\") pod \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.181247 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-catalog-content\") pod \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.181316 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2wq\" (UniqueName: \"kubernetes.io/projected/33c9573d-cc8b-40a4-89f2-d1feda4b7162-kube-api-access-xr2wq\") pod \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\" (UID: \"33c9573d-cc8b-40a4-89f2-d1feda4b7162\") " Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.182149 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-utilities" (OuterVolumeSpecName: "utilities") pod "33c9573d-cc8b-40a4-89f2-d1feda4b7162" (UID: "33c9573d-cc8b-40a4-89f2-d1feda4b7162"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.186992 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c9573d-cc8b-40a4-89f2-d1feda4b7162-kube-api-access-xr2wq" (OuterVolumeSpecName: "kube-api-access-xr2wq") pod "33c9573d-cc8b-40a4-89f2-d1feda4b7162" (UID: "33c9573d-cc8b-40a4-89f2-d1feda4b7162"). InnerVolumeSpecName "kube-api-access-xr2wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.228113 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33c9573d-cc8b-40a4-89f2-d1feda4b7162" (UID: "33c9573d-cc8b-40a4-89f2-d1feda4b7162"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.284137 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.284165 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33c9573d-cc8b-40a4-89f2-d1feda4b7162-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.284176 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2wq\" (UniqueName: \"kubernetes.io/projected/33c9573d-cc8b-40a4-89f2-d1feda4b7162-kube-api-access-xr2wq\") on node \"crc\" DevicePath \"\"" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.478736 4880 generic.go:334] "Generic (PLEG): container finished" podID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" 
containerID="13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be" exitCode=0 Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.478778 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerDied","Data":"13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be"} Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.478802 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vd7l" event={"ID":"33c9573d-cc8b-40a4-89f2-d1feda4b7162","Type":"ContainerDied","Data":"699cf286ca880d07927d788bcc8f134d85e6336a5cbba4fab483325c35411998"} Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.478819 4880 scope.go:117] "RemoveContainer" containerID="13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.478937 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vd7l" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.499829 4880 scope.go:117] "RemoveContainer" containerID="097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.509594 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vd7l"] Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.527963 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9vd7l"] Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.528298 4880 scope.go:117] "RemoveContainer" containerID="264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.567584 4880 scope.go:117] "RemoveContainer" containerID="13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be" Dec 01 03:40:13 crc kubenswrapper[4880]: E1201 03:40:13.568105 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be\": container with ID starting with 13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be not found: ID does not exist" containerID="13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.568139 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be"} err="failed to get container status \"13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be\": rpc error: code = NotFound desc = could not find container \"13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be\": container with ID starting with 13c8c087074f86f2eafdea9b275011a7f7e03c6bce9da8ef35b6480bba8177be not 
found: ID does not exist" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.568159 4880 scope.go:117] "RemoveContainer" containerID="097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5" Dec 01 03:40:13 crc kubenswrapper[4880]: E1201 03:40:13.568581 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5\": container with ID starting with 097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5 not found: ID does not exist" containerID="097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.568641 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5"} err="failed to get container status \"097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5\": rpc error: code = NotFound desc = could not find container \"097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5\": container with ID starting with 097b722e81302ab8dfb247d70ae2a90d62bc5baec5ce8546ecd994a61e8562a5 not found: ID does not exist" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.568670 4880 scope.go:117] "RemoveContainer" containerID="264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e" Dec 01 03:40:13 crc kubenswrapper[4880]: E1201 03:40:13.569660 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e\": container with ID starting with 264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e not found: ID does not exist" containerID="264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e" Dec 01 03:40:13 crc kubenswrapper[4880]: I1201 03:40:13.569695 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e"} err="failed to get container status \"264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e\": rpc error: code = NotFound desc = could not find container \"264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e\": container with ID starting with 264725effaa33975f69909e510b0656d74132e6ccb219a8edf58944af57ac06e not found: ID does not exist" Dec 01 03:40:14 crc kubenswrapper[4880]: I1201 03:40:14.798933 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" path="/var/lib/kubelet/pods/33c9573d-cc8b-40a4-89f2-d1feda4b7162/volumes" Dec 01 03:40:47 crc kubenswrapper[4880]: I1201 03:40:47.369203 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:40:47 crc kubenswrapper[4880]: I1201 03:40:47.369914 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:41:17 crc kubenswrapper[4880]: I1201 03:41:17.369466 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:41:17 crc kubenswrapper[4880]: I1201 03:41:17.370047 4880 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.369476 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.370083 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.370134 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.370906 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84d4567f8e44b9d8c3d5a522a1d30e6ab551d582fd74bdcfce5fce4ca3dcf52a"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.370974 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" 
containerID="cri-o://84d4567f8e44b9d8c3d5a522a1d30e6ab551d582fd74bdcfce5fce4ca3dcf52a" gracePeriod=600 Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.719985 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="84d4567f8e44b9d8c3d5a522a1d30e6ab551d582fd74bdcfce5fce4ca3dcf52a" exitCode=0 Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.720054 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"84d4567f8e44b9d8c3d5a522a1d30e6ab551d582fd74bdcfce5fce4ca3dcf52a"} Dec 01 03:41:47 crc kubenswrapper[4880]: I1201 03:41:47.720286 4880 scope.go:117] "RemoveContainer" containerID="7de099dfde9fc60e21a252f2d6625d0b165f61ac8bf914af954007ed3b5f167b" Dec 01 03:41:48 crc kubenswrapper[4880]: I1201 03:41:48.732503 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122"} Dec 01 03:43:08 crc kubenswrapper[4880]: I1201 03:43:08.624089 4880 generic.go:334] "Generic (PLEG): container finished" podID="2e68604b-db33-4c1e-acac-bda832f95b3d" containerID="fe1464495f6cfb93efbc86d9020f16b7e3b2a8d83a572090906146932b022d98" exitCode=0 Dec 01 03:43:08 crc kubenswrapper[4880]: I1201 03:43:08.624194 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" event={"ID":"2e68604b-db33-4c1e-acac-bda832f95b3d","Type":"ContainerDied","Data":"fe1464495f6cfb93efbc86d9020f16b7e3b2a8d83a572090906146932b022d98"} Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.061364 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.142167 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-telemetry-combined-ca-bundle\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.142305 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-inventory\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.142469 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-1\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.143415 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ssh-key\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.143460 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-2\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.143483 4880 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7495\" (UniqueName: \"kubernetes.io/projected/2e68604b-db33-4c1e-acac-bda832f95b3d-kube-api-access-x7495\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.143556 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-0\") pod \"2e68604b-db33-4c1e-acac-bda832f95b3d\" (UID: \"2e68604b-db33-4c1e-acac-bda832f95b3d\") " Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.157088 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e68604b-db33-4c1e-acac-bda832f95b3d-kube-api-access-x7495" (OuterVolumeSpecName: "kube-api-access-x7495") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "kube-api-access-x7495". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.157716 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.174459 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.175823 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-inventory" (OuterVolumeSpecName: "inventory") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.176225 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.176852 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.184090 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2e68604b-db33-4c1e-acac-bda832f95b3d" (UID: "2e68604b-db33-4c1e-acac-bda832f95b3d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.246298 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.246331 4880 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.246343 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7495\" (UniqueName: \"kubernetes.io/projected/2e68604b-db33-4c1e-acac-bda832f95b3d-kube-api-access-x7495\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.246354 4880 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.246365 4880 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc 
kubenswrapper[4880]: I1201 03:43:10.246376 4880 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.246385 4880 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2e68604b-db33-4c1e-acac-bda832f95b3d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.652400 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" event={"ID":"2e68604b-db33-4c1e-acac-bda832f95b3d","Type":"ContainerDied","Data":"b5c548b1d35df67e1a85d354d01f7d11a80d79eccceb27be89075febfbe53345"} Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.652445 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5c548b1d35df67e1a85d354d01f7d11a80d79eccceb27be89075febfbe53345" Dec 01 03:43:10 crc kubenswrapper[4880]: I1201 03:43:10.652488 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-d9rx4" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.393550 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kzkv2"] Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395046 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="extract-utilities" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395072 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="extract-utilities" Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395110 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e68604b-db33-4c1e-acac-bda832f95b3d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395125 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e68604b-db33-4c1e-acac-bda832f95b3d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395146 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="registry-server" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395160 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="registry-server" Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395193 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="registry-server" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395206 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="registry-server" Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395246 4880 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="extract-content" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395262 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="extract-content" Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395302 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="extract-content" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395316 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="extract-content" Dec 01 03:43:27 crc kubenswrapper[4880]: E1201 03:43:27.395344 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="extract-utilities" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395357 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="extract-utilities" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395731 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c9573d-cc8b-40a4-89f2-d1feda4b7162" containerName="registry-server" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395757 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e68604b-db33-4c1e-acac-bda832f95b3d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.395798 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="590f7cc3-04d2-465d-8686-1f38f4519c1e" containerName="registry-server" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.398399 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.419194 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kzkv2"] Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.464459 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-utilities\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.464515 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jtf\" (UniqueName: \"kubernetes.io/projected/ad986f80-5e90-471d-8b7a-aa647060209c-kube-api-access-r9jtf\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.464594 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-catalog-content\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.567010 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-utilities\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.567076 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r9jtf\" (UniqueName: \"kubernetes.io/projected/ad986f80-5e90-471d-8b7a-aa647060209c-kube-api-access-r9jtf\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.567187 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-catalog-content\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.567575 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-utilities\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.567631 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-catalog-content\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.586079 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jtf\" (UniqueName: \"kubernetes.io/projected/ad986f80-5e90-471d-8b7a-aa647060209c-kube-api-access-r9jtf\") pod \"redhat-operators-kzkv2\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:27 crc kubenswrapper[4880]: I1201 03:43:27.722398 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:28 crc kubenswrapper[4880]: I1201 03:43:28.181393 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kzkv2"] Dec 01 03:43:28 crc kubenswrapper[4880]: I1201 03:43:28.859783 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad986f80-5e90-471d-8b7a-aa647060209c" containerID="2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f" exitCode=0 Dec 01 03:43:28 crc kubenswrapper[4880]: I1201 03:43:28.859841 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerDied","Data":"2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f"} Dec 01 03:43:28 crc kubenswrapper[4880]: I1201 03:43:28.862590 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerStarted","Data":"afd45be99e6403cb000dd250586e1fed54fb93fe96c227f7be0e07a53f0d9189"} Dec 01 03:43:28 crc kubenswrapper[4880]: I1201 03:43:28.862279 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:43:30 crc kubenswrapper[4880]: I1201 03:43:30.885823 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerStarted","Data":"cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759"} Dec 01 03:43:33 crc kubenswrapper[4880]: I1201 03:43:33.924054 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad986f80-5e90-471d-8b7a-aa647060209c" containerID="cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759" exitCode=0 Dec 01 03:43:33 crc kubenswrapper[4880]: I1201 03:43:33.924450 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerDied","Data":"cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759"} Dec 01 03:43:34 crc kubenswrapper[4880]: I1201 03:43:34.938383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerStarted","Data":"4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0"} Dec 01 03:43:34 crc kubenswrapper[4880]: I1201 03:43:34.965341 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kzkv2" podStartSLOduration=2.395241924 podStartE2EDuration="7.965317201s" podCreationTimestamp="2025-12-01 03:43:27 +0000 UTC" firstStartedPulling="2025-12-01 03:43:28.862084723 +0000 UTC m=+2838.373339095" lastFinishedPulling="2025-12-01 03:43:34.43215997 +0000 UTC m=+2843.943414372" observedRunningTime="2025-12-01 03:43:34.957603741 +0000 UTC m=+2844.468858113" watchObservedRunningTime="2025-12-01 03:43:34.965317201 +0000 UTC m=+2844.476571583" Dec 01 03:43:37 crc kubenswrapper[4880]: I1201 03:43:37.723426 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:37 crc kubenswrapper[4880]: I1201 03:43:37.723754 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:38 crc kubenswrapper[4880]: I1201 03:43:38.804376 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kzkv2" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="registry-server" probeResult="failure" output=< Dec 01 03:43:38 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 03:43:38 crc kubenswrapper[4880]: > Dec 01 03:43:47 crc kubenswrapper[4880]: I1201 
03:43:47.369242 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:43:47 crc kubenswrapper[4880]: I1201 03:43:47.372000 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:43:47 crc kubenswrapper[4880]: I1201 03:43:47.816147 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:47 crc kubenswrapper[4880]: I1201 03:43:47.899893 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:48 crc kubenswrapper[4880]: I1201 03:43:48.075299 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kzkv2"] Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.089035 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kzkv2" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="registry-server" containerID="cri-o://4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0" gracePeriod=2 Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.655908 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.701625 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-catalog-content\") pod \"ad986f80-5e90-471d-8b7a-aa647060209c\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.701830 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9jtf\" (UniqueName: \"kubernetes.io/projected/ad986f80-5e90-471d-8b7a-aa647060209c-kube-api-access-r9jtf\") pod \"ad986f80-5e90-471d-8b7a-aa647060209c\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.701921 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-utilities\") pod \"ad986f80-5e90-471d-8b7a-aa647060209c\" (UID: \"ad986f80-5e90-471d-8b7a-aa647060209c\") " Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.703236 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-utilities" (OuterVolumeSpecName: "utilities") pod "ad986f80-5e90-471d-8b7a-aa647060209c" (UID: "ad986f80-5e90-471d-8b7a-aa647060209c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.708706 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad986f80-5e90-471d-8b7a-aa647060209c-kube-api-access-r9jtf" (OuterVolumeSpecName: "kube-api-access-r9jtf") pod "ad986f80-5e90-471d-8b7a-aa647060209c" (UID: "ad986f80-5e90-471d-8b7a-aa647060209c"). InnerVolumeSpecName "kube-api-access-r9jtf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.807833 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.808104 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9jtf\" (UniqueName: \"kubernetes.io/projected/ad986f80-5e90-471d-8b7a-aa647060209c-kube-api-access-r9jtf\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.816002 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad986f80-5e90-471d-8b7a-aa647060209c" (UID: "ad986f80-5e90-471d-8b7a-aa647060209c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:43:49 crc kubenswrapper[4880]: I1201 03:43:49.910703 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad986f80-5e90-471d-8b7a-aa647060209c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.105036 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad986f80-5e90-471d-8b7a-aa647060209c" containerID="4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0" exitCode=0 Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.105094 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerDied","Data":"4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0"} Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.106168 4880 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-kzkv2" event={"ID":"ad986f80-5e90-471d-8b7a-aa647060209c","Type":"ContainerDied","Data":"afd45be99e6403cb000dd250586e1fed54fb93fe96c227f7be0e07a53f0d9189"} Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.106209 4880 scope.go:117] "RemoveContainer" containerID="4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.105148 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kzkv2" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.149803 4880 scope.go:117] "RemoveContainer" containerID="cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.165070 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kzkv2"] Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.179300 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kzkv2"] Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.186369 4880 scope.go:117] "RemoveContainer" containerID="2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.235566 4880 scope.go:117] "RemoveContainer" containerID="4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0" Dec 01 03:43:50 crc kubenswrapper[4880]: E1201 03:43:50.236056 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0\": container with ID starting with 4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0 not found: ID does not exist" containerID="4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.236087 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0"} err="failed to get container status \"4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0\": rpc error: code = NotFound desc = could not find container \"4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0\": container with ID starting with 4cd491fab8b3164b22c7fb44e374a9fa34b070fd33e0ddd726f5f3ef0a487fa0 not found: ID does not exist" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.236111 4880 scope.go:117] "RemoveContainer" containerID="cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759" Dec 01 03:43:50 crc kubenswrapper[4880]: E1201 03:43:50.236335 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759\": container with ID starting with cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759 not found: ID does not exist" containerID="cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.236363 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759"} err="failed to get container status \"cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759\": rpc error: code = NotFound desc = could not find container \"cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759\": container with ID starting with cf89c28bc5aad2e982e0a46542f838be111db37ea6e868ce6b21e78a337aa759 not found: ID does not exist" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.236377 4880 scope.go:117] "RemoveContainer" containerID="2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f" Dec 01 03:43:50 crc kubenswrapper[4880]: E1201 
03:43:50.236853 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f\": container with ID starting with 2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f not found: ID does not exist" containerID="2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.236929 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f"} err="failed to get container status \"2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f\": rpc error: code = NotFound desc = could not find container \"2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f\": container with ID starting with 2e0cdd6027b1fd59c3f94a0d286ee321e5e6cc70ef37856d7745b783d51d854f not found: ID does not exist" Dec 01 03:43:50 crc kubenswrapper[4880]: I1201 03:43:50.802522 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" path="/var/lib/kubelet/pods/ad986f80-5e90-471d-8b7a-aa647060209c/volumes" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.853487 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Dec 01 03:44:09 crc kubenswrapper[4880]: E1201 03:44:09.858134 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="registry-server" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.858170 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="registry-server" Dec 01 03:44:09 crc kubenswrapper[4880]: E1201 03:44:09.858204 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="extract-utilities" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.858214 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="extract-utilities" Dec 01 03:44:09 crc kubenswrapper[4880]: E1201 03:44:09.858234 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="extract-content" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.858242 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="extract-content" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.858494 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad986f80-5e90-471d-8b7a-aa647060209c" containerName="registry-server" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.865929 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.866086 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.870073 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-p26bl" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.870338 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.870544 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.871239 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989162 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989244 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989293 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989435 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989553 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989625 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.989711 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc 
kubenswrapper[4880]: I1201 03:44:09.989826 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:09 crc kubenswrapper[4880]: I1201 03:44:09.990096 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwzzh\" (UniqueName: \"kubernetes.io/projected/b0802478-b2a7-43fa-bcba-1e7a154e9572-kube-api-access-zwzzh\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.092456 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.092561 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.092598 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-config-data\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.092629 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.092650 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.092676 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.093048 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.093151 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.093843 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.094130 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwzzh\" (UniqueName: \"kubernetes.io/projected/b0802478-b2a7-43fa-bcba-1e7a154e9572-kube-api-access-zwzzh\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.094545 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.094941 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.095235 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.097167 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.101898 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.102106 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.105673 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.114169 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwzzh\" (UniqueName: \"kubernetes.io/projected/b0802478-b2a7-43fa-bcba-1e7a154e9572-kube-api-access-zwzzh\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.147717 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.197293 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 03:44:10 crc kubenswrapper[4880]: W1201 03:44:10.799060 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0802478_b2a7_43fa_bcba_1e7a154e9572.slice/crio-968e92da00b26f7d12637e0b6c5b8fbdbf94affdae08e0305bb6b89ace01ca29 WatchSource:0}: Error finding container 968e92da00b26f7d12637e0b6c5b8fbdbf94affdae08e0305bb6b89ace01ca29: Status 404 returned error can't find the container with id 968e92da00b26f7d12637e0b6c5b8fbdbf94affdae08e0305bb6b89ace01ca29 Dec 01 03:44:10 crc kubenswrapper[4880]: I1201 03:44:10.837659 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Dec 01 03:44:11 crc kubenswrapper[4880]: I1201 03:44:11.358484 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"b0802478-b2a7-43fa-bcba-1e7a154e9572","Type":"ContainerStarted","Data":"968e92da00b26f7d12637e0b6c5b8fbdbf94affdae08e0305bb6b89ace01ca29"} Dec 01 03:44:17 crc kubenswrapper[4880]: I1201 03:44:17.368862 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:44:17 crc kubenswrapper[4880]: I1201 03:44:17.369360 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:44:42 crc kubenswrapper[4880]: E1201 03:44:42.505029 4880 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:44:42 crc kubenswrapper[4880]: E1201 03:44:42.505529 4880 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 01 03:44:42 crc kubenswrapper[4880]: E1201 03:44:42.507297 4880 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openst
ack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwzzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(b0802478-b2a7-43fa-bcba-1e7a154e9572): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 03:44:42 crc kubenswrapper[4880]: E1201 03:44:42.508783 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="b0802478-b2a7-43fa-bcba-1e7a154e9572" Dec 01 03:44:42 crc kubenswrapper[4880]: E1201 03:44:42.680478 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="b0802478-b2a7-43fa-bcba-1e7a154e9572" Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.369469 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.372914 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.373249 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.374709 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122"} 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.375177 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" gracePeriod=600 Dec 01 03:44:47 crc kubenswrapper[4880]: E1201 03:44:47.528608 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.726223 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" exitCode=0 Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.726289 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122"} Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.726340 4880 scope.go:117] "RemoveContainer" containerID="84d4567f8e44b9d8c3d5a522a1d30e6ab551d582fd74bdcfce5fce4ca3dcf52a" Dec 01 03:44:47 crc kubenswrapper[4880]: I1201 03:44:47.727381 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 
01 03:44:47 crc kubenswrapper[4880]: E1201 03:44:47.727912 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:44:54 crc kubenswrapper[4880]: I1201 03:44:54.892572 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 03:44:56 crc kubenswrapper[4880]: I1201 03:44:56.854948 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"b0802478-b2a7-43fa-bcba-1e7a154e9572","Type":"ContainerStarted","Data":"a9209d448d332d69de51dd47f83edd03dfeb93fa9d40c818bb333075dd953359"} Dec 01 03:44:56 crc kubenswrapper[4880]: I1201 03:44:56.885390 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.801402102 podStartE2EDuration="48.88537476s" podCreationTimestamp="2025-12-01 03:44:08 +0000 UTC" firstStartedPulling="2025-12-01 03:44:10.803485461 +0000 UTC m=+2880.314739843" lastFinishedPulling="2025-12-01 03:44:54.887458129 +0000 UTC m=+2924.398712501" observedRunningTime="2025-12-01 03:44:56.88330046 +0000 UTC m=+2926.394554852" watchObservedRunningTime="2025-12-01 03:44:56.88537476 +0000 UTC m=+2926.396629132" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.162080 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4"] Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.163764 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.165983 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.173255 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.188965 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4"] Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.338643 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8pfr\" (UniqueName: \"kubernetes.io/projected/0d231f53-ae4c-410a-a355-25f4dc70fda6-kube-api-access-z8pfr\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.338754 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d231f53-ae4c-410a-a355-25f4dc70fda6-secret-volume\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.338809 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d231f53-ae4c-410a-a355-25f4dc70fda6-config-volume\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.440433 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8pfr\" (UniqueName: \"kubernetes.io/projected/0d231f53-ae4c-410a-a355-25f4dc70fda6-kube-api-access-z8pfr\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.440561 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d231f53-ae4c-410a-a355-25f4dc70fda6-secret-volume\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.440635 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d231f53-ae4c-410a-a355-25f4dc70fda6-config-volume\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.441518 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d231f53-ae4c-410a-a355-25f4dc70fda6-config-volume\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.451101 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0d231f53-ae4c-410a-a355-25f4dc70fda6-secret-volume\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.473607 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8pfr\" (UniqueName: \"kubernetes.io/projected/0d231f53-ae4c-410a-a355-25f4dc70fda6-kube-api-access-z8pfr\") pod \"collect-profiles-29409345-pzfv4\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.483733 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.814078 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:45:00 crc kubenswrapper[4880]: E1201 03:45:00.816036 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:45:00 crc kubenswrapper[4880]: I1201 03:45:00.975030 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4"] Dec 01 03:45:01 crc kubenswrapper[4880]: I1201 03:45:01.913547 4880 generic.go:334] "Generic (PLEG): container finished" podID="0d231f53-ae4c-410a-a355-25f4dc70fda6" containerID="67c0861ad1431184f7eb6884d9960ca38a4efc0c69d65f4a077683e6270cac2d" 
exitCode=0 Dec 01 03:45:01 crc kubenswrapper[4880]: I1201 03:45:01.913603 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" event={"ID":"0d231f53-ae4c-410a-a355-25f4dc70fda6","Type":"ContainerDied","Data":"67c0861ad1431184f7eb6884d9960ca38a4efc0c69d65f4a077683e6270cac2d"} Dec 01 03:45:01 crc kubenswrapper[4880]: I1201 03:45:01.913856 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" event={"ID":"0d231f53-ae4c-410a-a355-25f4dc70fda6","Type":"ContainerStarted","Data":"25f00e7c69ee6f2d27ea7a02d8990d9d8614399a9b57c90208f10997f04d420b"} Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.330205 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.504778 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d231f53-ae4c-410a-a355-25f4dc70fda6-secret-volume\") pod \"0d231f53-ae4c-410a-a355-25f4dc70fda6\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.505219 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8pfr\" (UniqueName: \"kubernetes.io/projected/0d231f53-ae4c-410a-a355-25f4dc70fda6-kube-api-access-z8pfr\") pod \"0d231f53-ae4c-410a-a355-25f4dc70fda6\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.505308 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d231f53-ae4c-410a-a355-25f4dc70fda6-config-volume\") pod \"0d231f53-ae4c-410a-a355-25f4dc70fda6\" (UID: \"0d231f53-ae4c-410a-a355-25f4dc70fda6\") " Dec 01 
03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.506471 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d231f53-ae4c-410a-a355-25f4dc70fda6-config-volume" (OuterVolumeSpecName: "config-volume") pod "0d231f53-ae4c-410a-a355-25f4dc70fda6" (UID: "0d231f53-ae4c-410a-a355-25f4dc70fda6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.512026 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d231f53-ae4c-410a-a355-25f4dc70fda6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0d231f53-ae4c-410a-a355-25f4dc70fda6" (UID: "0d231f53-ae4c-410a-a355-25f4dc70fda6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.512050 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d231f53-ae4c-410a-a355-25f4dc70fda6-kube-api-access-z8pfr" (OuterVolumeSpecName: "kube-api-access-z8pfr") pod "0d231f53-ae4c-410a-a355-25f4dc70fda6" (UID: "0d231f53-ae4c-410a-a355-25f4dc70fda6"). InnerVolumeSpecName "kube-api-access-z8pfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.607979 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8pfr\" (UniqueName: \"kubernetes.io/projected/0d231f53-ae4c-410a-a355-25f4dc70fda6-kube-api-access-z8pfr\") on node \"crc\" DevicePath \"\"" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.608013 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d231f53-ae4c-410a-a355-25f4dc70fda6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.608023 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d231f53-ae4c-410a-a355-25f4dc70fda6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.943536 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" event={"ID":"0d231f53-ae4c-410a-a355-25f4dc70fda6","Type":"ContainerDied","Data":"25f00e7c69ee6f2d27ea7a02d8990d9d8614399a9b57c90208f10997f04d420b"} Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.943577 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f00e7c69ee6f2d27ea7a02d8990d9d8614399a9b57c90208f10997f04d420b" Dec 01 03:45:03 crc kubenswrapper[4880]: I1201 03:45:03.943621 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4" Dec 01 03:45:04 crc kubenswrapper[4880]: I1201 03:45:04.436447 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm"] Dec 01 03:45:04 crc kubenswrapper[4880]: I1201 03:45:04.448939 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409300-4f7vm"] Dec 01 03:45:04 crc kubenswrapper[4880]: I1201 03:45:04.797753 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4902a7aa-8767-491f-87d4-d90e98d0e700" path="/var/lib/kubelet/pods/4902a7aa-8767-491f-87d4-d90e98d0e700/volumes" Dec 01 03:45:13 crc kubenswrapper[4880]: I1201 03:45:13.784475 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:45:13 crc kubenswrapper[4880]: E1201 03:45:13.785683 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.473115 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tfjg4"] Dec 01 03:45:27 crc kubenswrapper[4880]: E1201 03:45:27.474105 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d231f53-ae4c-410a-a355-25f4dc70fda6" containerName="collect-profiles" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.474118 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d231f53-ae4c-410a-a355-25f4dc70fda6" containerName="collect-profiles" Dec 01 03:45:27 crc 
kubenswrapper[4880]: I1201 03:45:27.474396 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d231f53-ae4c-410a-a355-25f4dc70fda6" containerName="collect-profiles" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.476016 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.491112 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfjg4"] Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.668392 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cvqv\" (UniqueName: \"kubernetes.io/projected/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-kube-api-access-9cvqv\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.668457 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-catalog-content\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.668967 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-utilities\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.769917 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-utilities\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.770060 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cvqv\" (UniqueName: \"kubernetes.io/projected/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-kube-api-access-9cvqv\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.770102 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-catalog-content\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.770361 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-utilities\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.770440 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-catalog-content\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.799705 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cvqv\" (UniqueName: 
\"kubernetes.io/projected/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-kube-api-access-9cvqv\") pod \"redhat-marketplace-tfjg4\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:27 crc kubenswrapper[4880]: I1201 03:45:27.807472 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:28 crc kubenswrapper[4880]: I1201 03:45:28.365710 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfjg4"] Dec 01 03:45:28 crc kubenswrapper[4880]: I1201 03:45:28.784721 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:45:28 crc kubenswrapper[4880]: E1201 03:45:28.785302 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:45:29 crc kubenswrapper[4880]: I1201 03:45:29.223061 4880 generic.go:334] "Generic (PLEG): container finished" podID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerID="8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171" exitCode=0 Dec 01 03:45:29 crc kubenswrapper[4880]: I1201 03:45:29.223347 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerDied","Data":"8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171"} Dec 01 03:45:29 crc kubenswrapper[4880]: I1201 03:45:29.223376 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerStarted","Data":"63b3a31fdb568b8f9dc0ee80ba5351a96ebbf3449d384658b56f6efd761dcbda"} Dec 01 03:45:31 crc kubenswrapper[4880]: I1201 03:45:31.241004 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerStarted","Data":"d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001"} Dec 01 03:45:32 crc kubenswrapper[4880]: I1201 03:45:32.267738 4880 generic.go:334] "Generic (PLEG): container finished" podID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerID="d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001" exitCode=0 Dec 01 03:45:32 crc kubenswrapper[4880]: I1201 03:45:32.267779 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerDied","Data":"d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001"} Dec 01 03:45:33 crc kubenswrapper[4880]: I1201 03:45:33.298766 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerStarted","Data":"98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43"} Dec 01 03:45:33 crc kubenswrapper[4880]: I1201 03:45:33.329621 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tfjg4" podStartSLOduration=2.725764344 podStartE2EDuration="6.329601395s" podCreationTimestamp="2025-12-01 03:45:27 +0000 UTC" firstStartedPulling="2025-12-01 03:45:29.22675007 +0000 UTC m=+2958.738004432" lastFinishedPulling="2025-12-01 03:45:32.830587091 +0000 UTC m=+2962.341841483" observedRunningTime="2025-12-01 03:45:33.321058608 +0000 UTC m=+2962.832312980" 
watchObservedRunningTime="2025-12-01 03:45:33.329601395 +0000 UTC m=+2962.840855767" Dec 01 03:45:37 crc kubenswrapper[4880]: I1201 03:45:37.807704 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:37 crc kubenswrapper[4880]: I1201 03:45:37.808334 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:37 crc kubenswrapper[4880]: I1201 03:45:37.886450 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:38 crc kubenswrapper[4880]: I1201 03:45:38.404304 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:38 crc kubenswrapper[4880]: I1201 03:45:38.452192 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfjg4"] Dec 01 03:45:40 crc kubenswrapper[4880]: I1201 03:45:40.360498 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tfjg4" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="registry-server" containerID="cri-o://98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43" gracePeriod=2 Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.136186 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.285409 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-catalog-content\") pod \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.285531 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cvqv\" (UniqueName: \"kubernetes.io/projected/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-kube-api-access-9cvqv\") pod \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.285571 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-utilities\") pod \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\" (UID: \"e3f8434f-7b04-4910-bd7d-da9095a1cd9d\") " Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.286623 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-utilities" (OuterVolumeSpecName: "utilities") pod "e3f8434f-7b04-4910-bd7d-da9095a1cd9d" (UID: "e3f8434f-7b04-4910-bd7d-da9095a1cd9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.305315 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3f8434f-7b04-4910-bd7d-da9095a1cd9d" (UID: "e3f8434f-7b04-4910-bd7d-da9095a1cd9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.371250 4880 generic.go:334] "Generic (PLEG): container finished" podID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerID="98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43" exitCode=0 Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.371292 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerDied","Data":"98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43"} Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.371321 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfjg4" event={"ID":"e3f8434f-7b04-4910-bd7d-da9095a1cd9d","Type":"ContainerDied","Data":"63b3a31fdb568b8f9dc0ee80ba5351a96ebbf3449d384658b56f6efd761dcbda"} Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.371338 4880 scope.go:117] "RemoveContainer" containerID="98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.371472 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfjg4" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.375737 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-kube-api-access-9cvqv" (OuterVolumeSpecName: "kube-api-access-9cvqv") pod "e3f8434f-7b04-4910-bd7d-da9095a1cd9d" (UID: "e3f8434f-7b04-4910-bd7d-da9095a1cd9d"). InnerVolumeSpecName "kube-api-access-9cvqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.387070 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.387089 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cvqv\" (UniqueName: \"kubernetes.io/projected/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-kube-api-access-9cvqv\") on node \"crc\" DevicePath \"\"" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.387100 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3f8434f-7b04-4910-bd7d-da9095a1cd9d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.429065 4880 scope.go:117] "RemoveContainer" containerID="d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.455569 4880 scope.go:117] "RemoveContainer" containerID="8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.496163 4880 scope.go:117] "RemoveContainer" containerID="98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43" Dec 01 03:45:41 crc kubenswrapper[4880]: E1201 03:45:41.496528 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43\": container with ID starting with 98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43 not found: ID does not exist" containerID="98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.496659 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43"} err="failed to get container status \"98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43\": rpc error: code = NotFound desc = could not find container \"98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43\": container with ID starting with 98a5b90a45e617f412fd29b677dd986a23fbf0b482a1fd510b88ed48b672db43 not found: ID does not exist" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.496766 4880 scope.go:117] "RemoveContainer" containerID="d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001" Dec 01 03:45:41 crc kubenswrapper[4880]: E1201 03:45:41.497892 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001\": container with ID starting with d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001 not found: ID does not exist" containerID="d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.497929 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001"} err="failed to get container status \"d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001\": rpc error: code = NotFound desc = could not find container \"d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001\": container with ID starting with d29dfb2f8a33bc80e7fae35ff566602932b7c57d36b4befb9a42730bab127001 not found: ID does not exist" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.497955 4880 scope.go:117] "RemoveContainer" containerID="8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171" Dec 01 03:45:41 crc kubenswrapper[4880]: E1201 03:45:41.498224 4880 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171\": container with ID starting with 8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171 not found: ID does not exist" containerID="8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.498326 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171"} err="failed to get container status \"8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171\": rpc error: code = NotFound desc = could not find container \"8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171\": container with ID starting with 8af38b96e8be02adc77f1501249d33042b9f563431e61634d44328a410034171 not found: ID does not exist" Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.701801 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfjg4"] Dec 01 03:45:41 crc kubenswrapper[4880]: I1201 03:45:41.709911 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfjg4"] Dec 01 03:45:42 crc kubenswrapper[4880]: I1201 03:45:42.454631 4880 scope.go:117] "RemoveContainer" containerID="775476b0fff2e75fdd10e62f6b8a48e63ad35fedcc7305a83eeccf9b67414469" Dec 01 03:45:42 crc kubenswrapper[4880]: I1201 03:45:42.783961 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:45:42 crc kubenswrapper[4880]: E1201 03:45:42.784251 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:45:42 crc kubenswrapper[4880]: I1201 03:45:42.796052 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" path="/var/lib/kubelet/pods/e3f8434f-7b04-4910-bd7d-da9095a1cd9d/volumes" Dec 01 03:45:56 crc kubenswrapper[4880]: I1201 03:45:56.784743 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:45:56 crc kubenswrapper[4880]: E1201 03:45:56.785527 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:46:09 crc kubenswrapper[4880]: I1201 03:46:09.784814 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:46:09 crc kubenswrapper[4880]: E1201 03:46:09.785540 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:46:20 crc kubenswrapper[4880]: I1201 03:46:20.797032 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 
01 03:46:20 crc kubenswrapper[4880]: E1201 03:46:20.800470 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:46:34 crc kubenswrapper[4880]: I1201 03:46:34.784429 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:46:34 crc kubenswrapper[4880]: E1201 03:46:34.785273 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:46:45 crc kubenswrapper[4880]: I1201 03:46:45.786088 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:46:45 crc kubenswrapper[4880]: E1201 03:46:45.786755 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:46:59 crc kubenswrapper[4880]: I1201 03:46:59.783910 4880 scope.go:117] "RemoveContainer" 
containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:46:59 crc kubenswrapper[4880]: E1201 03:46:59.820986 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:47:12 crc kubenswrapper[4880]: I1201 03:47:12.783881 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:47:12 crc kubenswrapper[4880]: E1201 03:47:12.784598 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:47:25 crc kubenswrapper[4880]: I1201 03:47:25.783999 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:47:25 crc kubenswrapper[4880]: E1201 03:47:25.784609 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:47:40 crc kubenswrapper[4880]: I1201 03:47:40.792262 4880 scope.go:117] 
"RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:47:40 crc kubenswrapper[4880]: E1201 03:47:40.793293 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:47:51 crc kubenswrapper[4880]: I1201 03:47:51.784391 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:47:51 crc kubenswrapper[4880]: E1201 03:47:51.786237 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:48:05 crc kubenswrapper[4880]: I1201 03:48:05.785081 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:48:05 crc kubenswrapper[4880]: E1201 03:48:05.785676 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:48:16 crc kubenswrapper[4880]: I1201 03:48:16.785236 
4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:48:16 crc kubenswrapper[4880]: E1201 03:48:16.786231 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:48:27 crc kubenswrapper[4880]: I1201 03:48:27.784150 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:48:27 crc kubenswrapper[4880]: E1201 03:48:27.785203 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:48:39 crc kubenswrapper[4880]: I1201 03:48:39.785145 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:48:39 crc kubenswrapper[4880]: E1201 03:48:39.786003 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:48:54 crc kubenswrapper[4880]: I1201 
03:48:54.784513 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:48:54 crc kubenswrapper[4880]: E1201 03:48:54.785542 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:49:09 crc kubenswrapper[4880]: I1201 03:49:09.784586 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:49:09 crc kubenswrapper[4880]: E1201 03:49:09.785320 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:49:23 crc kubenswrapper[4880]: I1201 03:49:23.784376 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:49:23 crc kubenswrapper[4880]: E1201 03:49:23.785232 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:49:37 crc 
kubenswrapper[4880]: I1201 03:49:37.784554 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:49:37 crc kubenswrapper[4880]: E1201 03:49:37.785484 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:49:48 crc kubenswrapper[4880]: I1201 03:49:48.783704 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:49:49 crc kubenswrapper[4880]: I1201 03:49:49.783250 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"24c006a70c18c354b78a3484bd8777892d6cd7df639952a9adf16ebd2293a24b"} Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.106417 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w4pbr"] Dec 01 03:50:50 crc kubenswrapper[4880]: E1201 03:50:50.112959 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="extract-utilities" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.112995 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="extract-utilities" Dec 01 03:50:50 crc kubenswrapper[4880]: E1201 03:50:50.113010 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="extract-content" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 
03:50:50.113016 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="extract-content" Dec 01 03:50:50 crc kubenswrapper[4880]: E1201 03:50:50.113047 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="registry-server" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.113055 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="registry-server" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.113454 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f8434f-7b04-4910-bd7d-da9095a1cd9d" containerName="registry-server" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.115029 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.127632 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4pbr"] Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.224835 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-catalog-content\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.225413 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-utilities\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.225544 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ns5g\" (UniqueName: \"kubernetes.io/projected/1321d8c0-c85a-4a90-938d-29f717d36d4b-kube-api-access-4ns5g\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.328209 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-catalog-content\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.328782 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-utilities\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.328843 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ns5g\" (UniqueName: \"kubernetes.io/projected/1321d8c0-c85a-4a90-938d-29f717d36d4b-kube-api-access-4ns5g\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.329098 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-catalog-content\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 
03:50:50.329710 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-utilities\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.352807 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ns5g\" (UniqueName: \"kubernetes.io/projected/1321d8c0-c85a-4a90-938d-29f717d36d4b-kube-api-access-4ns5g\") pod \"certified-operators-w4pbr\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:50 crc kubenswrapper[4880]: I1201 03:50:50.437477 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:50:51 crc kubenswrapper[4880]: I1201 03:50:51.150441 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4pbr"] Dec 01 03:50:51 crc kubenswrapper[4880]: I1201 03:50:51.366749 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerStarted","Data":"9e9bb956a0bb9f28a0aa12c82d1e2f8b601e2cb9bd6e788f3d3d1831bee6ff0a"} Dec 01 03:50:52 crc kubenswrapper[4880]: I1201 03:50:52.378060 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerDied","Data":"768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6"} Dec 01 03:50:52 crc kubenswrapper[4880]: I1201 03:50:52.378214 4880 generic.go:334] "Generic (PLEG): container finished" podID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerID="768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6" exitCode=0 
Dec 01 03:50:52 crc kubenswrapper[4880]: I1201 03:50:52.382015 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 03:50:54 crc kubenswrapper[4880]: I1201 03:50:54.398158 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerStarted","Data":"7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5"} Dec 01 03:50:55 crc kubenswrapper[4880]: I1201 03:50:55.406678 4880 generic.go:334] "Generic (PLEG): container finished" podID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerID="7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5" exitCode=0 Dec 01 03:50:55 crc kubenswrapper[4880]: I1201 03:50:55.406942 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerDied","Data":"7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5"} Dec 01 03:50:56 crc kubenswrapper[4880]: I1201 03:50:56.417759 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerStarted","Data":"634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02"} Dec 01 03:50:56 crc kubenswrapper[4880]: I1201 03:50:56.442263 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w4pbr" podStartSLOduration=2.899576607 podStartE2EDuration="6.441963918s" podCreationTimestamp="2025-12-01 03:50:50 +0000 UTC" firstStartedPulling="2025-12-01 03:50:52.380594729 +0000 UTC m=+3281.891849101" lastFinishedPulling="2025-12-01 03:50:55.92298203 +0000 UTC m=+3285.434236412" observedRunningTime="2025-12-01 03:50:56.431205778 +0000 UTC m=+3285.942460150" watchObservedRunningTime="2025-12-01 03:50:56.441963918 +0000 
UTC m=+3285.953218280" Dec 01 03:51:00 crc kubenswrapper[4880]: I1201 03:51:00.438000 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:51:00 crc kubenswrapper[4880]: I1201 03:51:00.438545 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:51:01 crc kubenswrapper[4880]: I1201 03:51:01.482967 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-w4pbr" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="registry-server" probeResult="failure" output=< Dec 01 03:51:01 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 03:51:01 crc kubenswrapper[4880]: > Dec 01 03:51:10 crc kubenswrapper[4880]: I1201 03:51:10.506079 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:51:10 crc kubenswrapper[4880]: I1201 03:51:10.566844 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:51:10 crc kubenswrapper[4880]: I1201 03:51:10.761295 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4pbr"] Dec 01 03:51:11 crc kubenswrapper[4880]: I1201 03:51:11.534043 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w4pbr" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="registry-server" containerID="cri-o://634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02" gracePeriod=2 Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.330075 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.359996 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-catalog-content\") pod \"1321d8c0-c85a-4a90-938d-29f717d36d4b\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.360159 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-utilities\") pod \"1321d8c0-c85a-4a90-938d-29f717d36d4b\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.360187 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ns5g\" (UniqueName: \"kubernetes.io/projected/1321d8c0-c85a-4a90-938d-29f717d36d4b-kube-api-access-4ns5g\") pod \"1321d8c0-c85a-4a90-938d-29f717d36d4b\" (UID: \"1321d8c0-c85a-4a90-938d-29f717d36d4b\") " Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.365410 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-utilities" (OuterVolumeSpecName: "utilities") pod "1321d8c0-c85a-4a90-938d-29f717d36d4b" (UID: "1321d8c0-c85a-4a90-938d-29f717d36d4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.390210 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1321d8c0-c85a-4a90-938d-29f717d36d4b-kube-api-access-4ns5g" (OuterVolumeSpecName: "kube-api-access-4ns5g") pod "1321d8c0-c85a-4a90-938d-29f717d36d4b" (UID: "1321d8c0-c85a-4a90-938d-29f717d36d4b"). InnerVolumeSpecName "kube-api-access-4ns5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.442569 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1321d8c0-c85a-4a90-938d-29f717d36d4b" (UID: "1321d8c0-c85a-4a90-938d-29f717d36d4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.462671 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.462704 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1321d8c0-c85a-4a90-938d-29f717d36d4b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.462714 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ns5g\" (UniqueName: \"kubernetes.io/projected/1321d8c0-c85a-4a90-938d-29f717d36d4b-kube-api-access-4ns5g\") on node \"crc\" DevicePath \"\"" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.542718 4880 generic.go:334] "Generic (PLEG): container finished" podID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerID="634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02" exitCode=0 Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.542766 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerDied","Data":"634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02"} Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.542793 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-w4pbr" event={"ID":"1321d8c0-c85a-4a90-938d-29f717d36d4b","Type":"ContainerDied","Data":"9e9bb956a0bb9f28a0aa12c82d1e2f8b601e2cb9bd6e788f3d3d1831bee6ff0a"} Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.543035 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4pbr" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.548424 4880 scope.go:117] "RemoveContainer" containerID="634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.586975 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4pbr"] Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.587935 4880 scope.go:117] "RemoveContainer" containerID="7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.598365 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w4pbr"] Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.605608 4880 scope.go:117] "RemoveContainer" containerID="768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.660766 4880 scope.go:117] "RemoveContainer" containerID="634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02" Dec 01 03:51:12 crc kubenswrapper[4880]: E1201 03:51:12.663366 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02\": container with ID starting with 634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02 not found: ID does not exist" containerID="634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 
03:51:12.663566 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02"} err="failed to get container status \"634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02\": rpc error: code = NotFound desc = could not find container \"634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02\": container with ID starting with 634a35583c66ffa5e8c438289d0b5701f308cddfb3c1450f3def0cbec1c75a02 not found: ID does not exist" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.663599 4880 scope.go:117] "RemoveContainer" containerID="7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5" Dec 01 03:51:12 crc kubenswrapper[4880]: E1201 03:51:12.663931 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5\": container with ID starting with 7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5 not found: ID does not exist" containerID="7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.663964 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5"} err="failed to get container status \"7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5\": rpc error: code = NotFound desc = could not find container \"7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5\": container with ID starting with 7f7d68aee5cb59b7393109f4af2b08723fd6540469447127029fa224c05618b5 not found: ID does not exist" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.663989 4880 scope.go:117] "RemoveContainer" containerID="768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6" Dec 01 03:51:12 crc 
kubenswrapper[4880]: E1201 03:51:12.664245 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6\": container with ID starting with 768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6 not found: ID does not exist" containerID="768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.664262 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6"} err="failed to get container status \"768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6\": rpc error: code = NotFound desc = could not find container \"768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6\": container with ID starting with 768344ab1da2c4ece0b2bd5320c68663753c8d48dc94791e164608b16c8203c6 not found: ID does not exist" Dec 01 03:51:12 crc kubenswrapper[4880]: I1201 03:51:12.793803 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" path="/var/lib/kubelet/pods/1321d8c0-c85a-4a90-938d-29f717d36d4b/volumes" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.610350 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgz9d"] Dec 01 03:51:16 crc kubenswrapper[4880]: E1201 03:51:16.613426 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="extract-content" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.613453 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="extract-content" Dec 01 03:51:16 crc kubenswrapper[4880]: E1201 03:51:16.613500 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="extract-utilities" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.613508 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="extract-utilities" Dec 01 03:51:16 crc kubenswrapper[4880]: E1201 03:51:16.613542 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="registry-server" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.613548 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="registry-server" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.613988 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="1321d8c0-c85a-4a90-938d-29f717d36d4b" containerName="registry-server" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.624386 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.680650 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgz9d"] Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.737808 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-utilities\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.737936 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smgg\" (UniqueName: \"kubernetes.io/projected/955201e0-1abc-4a41-9d8a-e273e2b0137e-kube-api-access-2smgg\") pod \"community-operators-jgz9d\" (UID: 
\"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.738336 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-catalog-content\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.839857 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-catalog-content\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.840034 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-utilities\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.840431 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-catalog-content\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.840793 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-utilities\") pod \"community-operators-jgz9d\" (UID: 
\"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.842122 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2smgg\" (UniqueName: \"kubernetes.io/projected/955201e0-1abc-4a41-9d8a-e273e2b0137e-kube-api-access-2smgg\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.871171 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2smgg\" (UniqueName: \"kubernetes.io/projected/955201e0-1abc-4a41-9d8a-e273e2b0137e-kube-api-access-2smgg\") pod \"community-operators-jgz9d\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:16 crc kubenswrapper[4880]: I1201 03:51:16.978063 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:17 crc kubenswrapper[4880]: I1201 03:51:17.588303 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgz9d"] Dec 01 03:51:18 crc kubenswrapper[4880]: I1201 03:51:18.595304 4880 generic.go:334] "Generic (PLEG): container finished" podID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerID="dbdf192d2168fec2e26221a6b39a1414c3dbd96cad57221150cb69f02c9731d2" exitCode=0 Dec 01 03:51:18 crc kubenswrapper[4880]: I1201 03:51:18.595363 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerDied","Data":"dbdf192d2168fec2e26221a6b39a1414c3dbd96cad57221150cb69f02c9731d2"} Dec 01 03:51:18 crc kubenswrapper[4880]: I1201 03:51:18.595705 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerStarted","Data":"5faa4a8e9a519a4a94754ccd2e7d00fa1bf522e64bdd9331283d80bad4e7cf20"} Dec 01 03:51:20 crc kubenswrapper[4880]: I1201 03:51:20.656085 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerStarted","Data":"9b8c30b687e99f825a9927d6efeaedb4bfd1dd0dce13694c97cbdc38838b4775"} Dec 01 03:51:21 crc kubenswrapper[4880]: I1201 03:51:21.672576 4880 generic.go:334] "Generic (PLEG): container finished" podID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerID="9b8c30b687e99f825a9927d6efeaedb4bfd1dd0dce13694c97cbdc38838b4775" exitCode=0 Dec 01 03:51:21 crc kubenswrapper[4880]: I1201 03:51:21.673056 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" 
event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerDied","Data":"9b8c30b687e99f825a9927d6efeaedb4bfd1dd0dce13694c97cbdc38838b4775"} Dec 01 03:51:22 crc kubenswrapper[4880]: I1201 03:51:22.687480 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerStarted","Data":"a126a8c6235322676159a25db031b2345e7de8ac34db0334806b97cf7ebc87f0"} Dec 01 03:51:22 crc kubenswrapper[4880]: I1201 03:51:22.716089 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgz9d" podStartSLOduration=3.189802865 podStartE2EDuration="6.712735466s" podCreationTimestamp="2025-12-01 03:51:16 +0000 UTC" firstStartedPulling="2025-12-01 03:51:18.597344297 +0000 UTC m=+3308.108598669" lastFinishedPulling="2025-12-01 03:51:22.120276898 +0000 UTC m=+3311.631531270" observedRunningTime="2025-12-01 03:51:22.704457726 +0000 UTC m=+3312.215712098" watchObservedRunningTime="2025-12-01 03:51:22.712735466 +0000 UTC m=+3312.223989838" Dec 01 03:51:26 crc kubenswrapper[4880]: I1201 03:51:26.978769 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:26 crc kubenswrapper[4880]: I1201 03:51:26.980006 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:28 crc kubenswrapper[4880]: I1201 03:51:28.024919 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jgz9d" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="registry-server" probeResult="failure" output=< Dec 01 03:51:28 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 03:51:28 crc kubenswrapper[4880]: > Dec 01 03:51:37 crc kubenswrapper[4880]: I1201 03:51:37.038029 4880 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:37 crc kubenswrapper[4880]: I1201 03:51:37.100949 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:37 crc kubenswrapper[4880]: I1201 03:51:37.286609 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgz9d"] Dec 01 03:51:38 crc kubenswrapper[4880]: I1201 03:51:38.830842 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgz9d" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="registry-server" containerID="cri-o://a126a8c6235322676159a25db031b2345e7de8ac34db0334806b97cf7ebc87f0" gracePeriod=2 Dec 01 03:51:39 crc kubenswrapper[4880]: I1201 03:51:39.865420 4880 generic.go:334] "Generic (PLEG): container finished" podID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerID="a126a8c6235322676159a25db031b2345e7de8ac34db0334806b97cf7ebc87f0" exitCode=0 Dec 01 03:51:39 crc kubenswrapper[4880]: I1201 03:51:39.865973 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerDied","Data":"a126a8c6235322676159a25db031b2345e7de8ac34db0334806b97cf7ebc87f0"} Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.029653 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.128130 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-utilities\") pod \"955201e0-1abc-4a41-9d8a-e273e2b0137e\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.128538 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2smgg\" (UniqueName: \"kubernetes.io/projected/955201e0-1abc-4a41-9d8a-e273e2b0137e-kube-api-access-2smgg\") pod \"955201e0-1abc-4a41-9d8a-e273e2b0137e\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.128629 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-catalog-content\") pod \"955201e0-1abc-4a41-9d8a-e273e2b0137e\" (UID: \"955201e0-1abc-4a41-9d8a-e273e2b0137e\") " Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.130828 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-utilities" (OuterVolumeSpecName: "utilities") pod "955201e0-1abc-4a41-9d8a-e273e2b0137e" (UID: "955201e0-1abc-4a41-9d8a-e273e2b0137e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.167590 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955201e0-1abc-4a41-9d8a-e273e2b0137e-kube-api-access-2smgg" (OuterVolumeSpecName: "kube-api-access-2smgg") pod "955201e0-1abc-4a41-9d8a-e273e2b0137e" (UID: "955201e0-1abc-4a41-9d8a-e273e2b0137e"). InnerVolumeSpecName "kube-api-access-2smgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.195054 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "955201e0-1abc-4a41-9d8a-e273e2b0137e" (UID: "955201e0-1abc-4a41-9d8a-e273e2b0137e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.231495 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.231527 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2smgg\" (UniqueName: \"kubernetes.io/projected/955201e0-1abc-4a41-9d8a-e273e2b0137e-kube-api-access-2smgg\") on node \"crc\" DevicePath \"\"" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.231539 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/955201e0-1abc-4a41-9d8a-e273e2b0137e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.875788 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgz9d" event={"ID":"955201e0-1abc-4a41-9d8a-e273e2b0137e","Type":"ContainerDied","Data":"5faa4a8e9a519a4a94754ccd2e7d00fa1bf522e64bdd9331283d80bad4e7cf20"} Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.875994 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgz9d" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.876651 4880 scope.go:117] "RemoveContainer" containerID="a126a8c6235322676159a25db031b2345e7de8ac34db0334806b97cf7ebc87f0" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.917000 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgz9d"] Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.928800 4880 scope.go:117] "RemoveContainer" containerID="9b8c30b687e99f825a9927d6efeaedb4bfd1dd0dce13694c97cbdc38838b4775" Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.937740 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgz9d"] Dec 01 03:51:40 crc kubenswrapper[4880]: I1201 03:51:40.993462 4880 scope.go:117] "RemoveContainer" containerID="dbdf192d2168fec2e26221a6b39a1414c3dbd96cad57221150cb69f02c9731d2" Dec 01 03:51:42 crc kubenswrapper[4880]: I1201 03:51:42.792996 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" path="/var/lib/kubelet/pods/955201e0-1abc-4a41-9d8a-e273e2b0137e/volumes" Dec 01 03:52:17 crc kubenswrapper[4880]: I1201 03:52:17.369889 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:52:17 crc kubenswrapper[4880]: I1201 03:52:17.370776 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:52:47 crc kubenswrapper[4880]: 
I1201 03:52:47.369046 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:52:47 crc kubenswrapper[4880]: I1201 03:52:47.369808 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.368924 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.369545 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.369932 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.371150 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24c006a70c18c354b78a3484bd8777892d6cd7df639952a9adf16ebd2293a24b"} 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.371637 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://24c006a70c18c354b78a3484bd8777892d6cd7df639952a9adf16ebd2293a24b" gracePeriod=600 Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.816126 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="24c006a70c18c354b78a3484bd8777892d6cd7df639952a9adf16ebd2293a24b" exitCode=0 Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.816217 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"24c006a70c18c354b78a3484bd8777892d6cd7df639952a9adf16ebd2293a24b"} Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.816762 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca"} Dec 01 03:53:17 crc kubenswrapper[4880]: I1201 03:53:17.817533 4880 scope.go:117] "RemoveContainer" containerID="ab84ad27d9da638d7fb12bc4a06c7148a031a3d2b2a155e6a66a1a89a5eb2122" Dec 01 03:54:29 crc kubenswrapper[4880]: E1201 03:54:29.704326 4880 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.39:35698->38.102.83.39:42095: read tcp 38.102.83.39:35698->38.102.83.39:42095: read: connection reset by peer Dec 01 03:55:17 crc kubenswrapper[4880]: I1201 03:55:17.368827 4880 
patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:55:17 crc kubenswrapper[4880]: I1201 03:55:17.370452 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:55:47 crc kubenswrapper[4880]: I1201 03:55:47.370240 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:55:47 crc kubenswrapper[4880]: I1201 03:55:47.371361 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.369423 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.370121 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.370197 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.371509 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.371598 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" gracePeriod=600 Dec 01 03:56:17 crc kubenswrapper[4880]: E1201 03:56:17.497727 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.605383 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" exitCode=0 Dec 01 
03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.605428 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca"} Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.605471 4880 scope.go:117] "RemoveContainer" containerID="24c006a70c18c354b78a3484bd8777892d6cd7df639952a9adf16ebd2293a24b" Dec 01 03:56:17 crc kubenswrapper[4880]: I1201 03:56:17.606206 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:56:17 crc kubenswrapper[4880]: E1201 03:56:17.606931 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:56:27 crc kubenswrapper[4880]: I1201 03:56:27.785578 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:56:27 crc kubenswrapper[4880]: E1201 03:56:27.786644 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:56:39 crc kubenswrapper[4880]: I1201 03:56:39.803534 4880 scope.go:117] "RemoveContainer" 
containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:56:39 crc kubenswrapper[4880]: E1201 03:56:39.804356 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:56:53 crc kubenswrapper[4880]: I1201 03:56:53.784582 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:56:53 crc kubenswrapper[4880]: E1201 03:56:53.785264 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:57:07 crc kubenswrapper[4880]: I1201 03:57:07.784425 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:57:07 crc kubenswrapper[4880]: E1201 03:57:07.785171 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:57:21 crc kubenswrapper[4880]: I1201 03:57:21.784057 4880 scope.go:117] 
"RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:57:21 crc kubenswrapper[4880]: E1201 03:57:21.784757 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:57:34 crc kubenswrapper[4880]: I1201 03:57:34.784465 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:57:34 crc kubenswrapper[4880]: E1201 03:57:34.785311 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:57:46 crc kubenswrapper[4880]: I1201 03:57:46.784688 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:57:46 crc kubenswrapper[4880]: E1201 03:57:46.785343 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:58:01 crc kubenswrapper[4880]: I1201 03:58:01.783857 
4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:58:01 crc kubenswrapper[4880]: E1201 03:58:01.784810 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:58:13 crc kubenswrapper[4880]: I1201 03:58:13.784455 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:58:13 crc kubenswrapper[4880]: E1201 03:58:13.785666 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:58:28 crc kubenswrapper[4880]: I1201 03:58:28.784136 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:58:28 crc kubenswrapper[4880]: E1201 03:58:28.785342 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:58:40 crc kubenswrapper[4880]: I1201 
03:58:40.789768 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:58:40 crc kubenswrapper[4880]: E1201 03:58:40.790410 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:58:53 crc kubenswrapper[4880]: I1201 03:58:53.784286 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:58:53 crc kubenswrapper[4880]: E1201 03:58:53.786579 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:59:07 crc kubenswrapper[4880]: I1201 03:59:07.784446 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:59:07 crc kubenswrapper[4880]: E1201 03:59:07.785209 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:59:19 crc 
kubenswrapper[4880]: I1201 03:59:19.784323 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:59:19 crc kubenswrapper[4880]: E1201 03:59:19.786192 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:59:31 crc kubenswrapper[4880]: I1201 03:59:31.784313 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:59:31 crc kubenswrapper[4880]: E1201 03:59:31.784933 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 03:59:42 crc kubenswrapper[4880]: I1201 03:59:42.784136 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:59:42 crc kubenswrapper[4880]: E1201 03:59:42.784958 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 
01 03:59:54 crc kubenswrapper[4880]: I1201 03:59:54.786271 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 03:59:54 crc kubenswrapper[4880]: E1201 03:59:54.786967 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.336766 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6"] Dec 01 04:00:00 crc kubenswrapper[4880]: E1201 04:00:00.339938 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="extract-content" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.340146 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="extract-content" Dec 01 04:00:00 crc kubenswrapper[4880]: E1201 04:00:00.340266 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="extract-utilities" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.340328 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="extract-utilities" Dec 01 04:00:00 crc kubenswrapper[4880]: E1201 04:00:00.340399 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="registry-server" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.340467 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="registry-server" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.341142 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="955201e0-1abc-4a41-9d8a-e273e2b0137e" containerName="registry-server" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.342794 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.346141 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.401138 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.473620 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6"] Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.519088 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbkx\" (UniqueName: \"kubernetes.io/projected/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-kube-api-access-scbkx\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.519470 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-secret-volume\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 
crc kubenswrapper[4880]: I1201 04:00:00.519578 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-config-volume\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.621316 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-secret-volume\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.621369 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-config-volume\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.621426 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scbkx\" (UniqueName: \"kubernetes.io/projected/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-kube-api-access-scbkx\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.624570 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-config-volume\") pod \"collect-profiles-29409360-hhwm6\" (UID: 
\"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.643100 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbkx\" (UniqueName: \"kubernetes.io/projected/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-kube-api-access-scbkx\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.644102 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-secret-volume\") pod \"collect-profiles-29409360-hhwm6\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:00 crc kubenswrapper[4880]: I1201 04:00:00.680523 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:01 crc kubenswrapper[4880]: I1201 04:00:01.310097 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6"] Dec 01 04:00:01 crc kubenswrapper[4880]: I1201 04:00:01.850567 4880 generic.go:334] "Generic (PLEG): container finished" podID="f53e41ca-9a2b-446a-8d21-87b68bcbe84b" containerID="de0676f11686e2ecbb741cfbbe7e19e9f9276193a47446533c3149b7d57c5ca9" exitCode=0 Dec 01 04:00:01 crc kubenswrapper[4880]: I1201 04:00:01.850777 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" event={"ID":"f53e41ca-9a2b-446a-8d21-87b68bcbe84b","Type":"ContainerDied","Data":"de0676f11686e2ecbb741cfbbe7e19e9f9276193a47446533c3149b7d57c5ca9"} Dec 01 04:00:01 crc kubenswrapper[4880]: I1201 04:00:01.850802 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" event={"ID":"f53e41ca-9a2b-446a-8d21-87b68bcbe84b","Type":"ContainerStarted","Data":"55bf207c080e11f55c928042192735088ec4b65108e76d8c16430e54376ac3bf"} Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.458545 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wzr6f"] Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.460676 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.478060 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzr6f"] Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.560527 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/420271b0-3c53-4a5a-8611-639481161846-kube-api-access-kw8zb\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.560707 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-catalog-content\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.560729 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-utilities\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.663172 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/420271b0-3c53-4a5a-8611-639481161846-kube-api-access-kw8zb\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.663402 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-catalog-content\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.663438 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-utilities\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.664370 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-catalog-content\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.664615 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-utilities\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.687720 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/420271b0-3c53-4a5a-8611-639481161846-kube-api-access-kw8zb\") pod \"redhat-marketplace-wzr6f\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:02 crc kubenswrapper[4880]: I1201 04:00:02.782725 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.063075 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4tz82"] Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.065342 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.076097 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tz82"] Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.172271 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp2m5\" (UniqueName: \"kubernetes.io/projected/c204c239-5654-4a92-a800-3896b2c66452-kube-api-access-qp2m5\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.172550 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-catalog-content\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.172584 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-utilities\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.274446 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qp2m5\" (UniqueName: \"kubernetes.io/projected/c204c239-5654-4a92-a800-3896b2c66452-kube-api-access-qp2m5\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.274766 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-catalog-content\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.274801 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-utilities\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.275821 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-catalog-content\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.276352 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-utilities\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.304149 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp2m5\" (UniqueName: 
\"kubernetes.io/projected/c204c239-5654-4a92-a800-3896b2c66452-kube-api-access-qp2m5\") pod \"redhat-operators-4tz82\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.394962 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.407207 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.455066 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzr6f"] Dec 01 04:00:03 crc kubenswrapper[4880]: W1201 04:00:03.459579 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420271b0_3c53_4a5a_8611_639481161846.slice/crio-bbbd4796f536212879d2c3765a8a6cf019cbc008e0ff39d1d5f40ba4c24a8c54 WatchSource:0}: Error finding container bbbd4796f536212879d2c3765a8a6cf019cbc008e0ff39d1d5f40ba4c24a8c54: Status 404 returned error can't find the container with id bbbd4796f536212879d2c3765a8a6cf019cbc008e0ff39d1d5f40ba4c24a8c54 Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.477392 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-config-volume\") pod \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.477516 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-secret-volume\") pod \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\" (UID: 
\"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.477634 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scbkx\" (UniqueName: \"kubernetes.io/projected/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-kube-api-access-scbkx\") pod \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\" (UID: \"f53e41ca-9a2b-446a-8d21-87b68bcbe84b\") " Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.480004 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-config-volume" (OuterVolumeSpecName: "config-volume") pod "f53e41ca-9a2b-446a-8d21-87b68bcbe84b" (UID: "f53e41ca-9a2b-446a-8d21-87b68bcbe84b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.485270 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-kube-api-access-scbkx" (OuterVolumeSpecName: "kube-api-access-scbkx") pod "f53e41ca-9a2b-446a-8d21-87b68bcbe84b" (UID: "f53e41ca-9a2b-446a-8d21-87b68bcbe84b"). InnerVolumeSpecName "kube-api-access-scbkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.487216 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f53e41ca-9a2b-446a-8d21-87b68bcbe84b" (UID: "f53e41ca-9a2b-446a-8d21-87b68bcbe84b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.579412 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.579741 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scbkx\" (UniqueName: \"kubernetes.io/projected/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-kube-api-access-scbkx\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.579750 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53e41ca-9a2b-446a-8d21-87b68bcbe84b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.868602 4880 generic.go:334] "Generic (PLEG): container finished" podID="420271b0-3c53-4a5a-8611-639481161846" containerID="af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5" exitCode=0 Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.868767 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzr6f" event={"ID":"420271b0-3c53-4a5a-8611-639481161846","Type":"ContainerDied","Data":"af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5"} Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.869774 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzr6f" event={"ID":"420271b0-3c53-4a5a-8611-639481161846","Type":"ContainerStarted","Data":"bbbd4796f536212879d2c3765a8a6cf019cbc008e0ff39d1d5f40ba4c24a8c54"} Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.873113 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" 
event={"ID":"f53e41ca-9a2b-446a-8d21-87b68bcbe84b","Type":"ContainerDied","Data":"55bf207c080e11f55c928042192735088ec4b65108e76d8c16430e54376ac3bf"} Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.873177 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.873389 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55bf207c080e11f55c928042192735088ec4b65108e76d8c16430e54376ac3bf" Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.873491 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:00:03 crc kubenswrapper[4880]: I1201 04:00:03.901749 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tz82"] Dec 01 04:00:04 crc kubenswrapper[4880]: I1201 04:00:04.498481 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"] Dec 01 04:00:04 crc kubenswrapper[4880]: I1201 04:00:04.506242 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409315-74cz7"] Dec 01 04:00:04 crc kubenswrapper[4880]: I1201 04:00:04.795998 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f32e230-504f-40c2-8d2d-3add5e3a46d8" path="/var/lib/kubelet/pods/2f32e230-504f-40c2-8d2d-3add5e3a46d8/volumes" Dec 01 04:00:04 crc kubenswrapper[4880]: I1201 04:00:04.882276 4880 generic.go:334] "Generic (PLEG): container finished" podID="c204c239-5654-4a92-a800-3896b2c66452" containerID="89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0" exitCode=0 Dec 01 04:00:04 crc kubenswrapper[4880]: I1201 04:00:04.882334 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" 
event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerDied","Data":"89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0"} Dec 01 04:00:04 crc kubenswrapper[4880]: I1201 04:00:04.882363 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerStarted","Data":"83413e09ad0a37d49e7595363f4a6b8913e82cd34d3a7f7f21780188cf1e3492"} Dec 01 04:00:05 crc kubenswrapper[4880]: I1201 04:00:05.892902 4880 generic.go:334] "Generic (PLEG): container finished" podID="420271b0-3c53-4a5a-8611-639481161846" containerID="0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948" exitCode=0 Dec 01 04:00:05 crc kubenswrapper[4880]: I1201 04:00:05.893092 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzr6f" event={"ID":"420271b0-3c53-4a5a-8611-639481161846","Type":"ContainerDied","Data":"0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948"} Dec 01 04:00:06 crc kubenswrapper[4880]: I1201 04:00:06.785225 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:00:06 crc kubenswrapper[4880]: E1201 04:00:06.786181 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:00:06 crc kubenswrapper[4880]: I1201 04:00:06.908519 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzr6f" 
event={"ID":"420271b0-3c53-4a5a-8611-639481161846","Type":"ContainerStarted","Data":"230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1"} Dec 01 04:00:06 crc kubenswrapper[4880]: I1201 04:00:06.920012 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerStarted","Data":"c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44"} Dec 01 04:00:06 crc kubenswrapper[4880]: I1201 04:00:06.931914 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wzr6f" podStartSLOduration=2.144492542 podStartE2EDuration="4.931125173s" podCreationTimestamp="2025-12-01 04:00:02 +0000 UTC" firstStartedPulling="2025-12-01 04:00:03.871110569 +0000 UTC m=+3833.382364941" lastFinishedPulling="2025-12-01 04:00:06.6577432 +0000 UTC m=+3836.168997572" observedRunningTime="2025-12-01 04:00:06.930420936 +0000 UTC m=+3836.441675328" watchObservedRunningTime="2025-12-01 04:00:06.931125173 +0000 UTC m=+3836.442379545" Dec 01 04:00:09 crc kubenswrapper[4880]: I1201 04:00:09.947220 4880 generic.go:334] "Generic (PLEG): container finished" podID="c204c239-5654-4a92-a800-3896b2c66452" containerID="c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44" exitCode=0 Dec 01 04:00:09 crc kubenswrapper[4880]: I1201 04:00:09.947310 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerDied","Data":"c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44"} Dec 01 04:00:11 crc kubenswrapper[4880]: I1201 04:00:11.970444 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerStarted","Data":"7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7"} 
Dec 01 04:00:11 crc kubenswrapper[4880]: I1201 04:00:11.992098 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4tz82" podStartSLOduration=2.606257152 podStartE2EDuration="8.992080154s" podCreationTimestamp="2025-12-01 04:00:03 +0000 UTC" firstStartedPulling="2025-12-01 04:00:04.884101872 +0000 UTC m=+3834.395356244" lastFinishedPulling="2025-12-01 04:00:11.269924874 +0000 UTC m=+3840.781179246" observedRunningTime="2025-12-01 04:00:11.988511418 +0000 UTC m=+3841.499765790" watchObservedRunningTime="2025-12-01 04:00:11.992080154 +0000 UTC m=+3841.503334526" Dec 01 04:00:12 crc kubenswrapper[4880]: I1201 04:00:12.804039 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:12 crc kubenswrapper[4880]: I1201 04:00:12.804084 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:12 crc kubenswrapper[4880]: I1201 04:00:12.836476 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:13 crc kubenswrapper[4880]: I1201 04:00:13.040715 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:13 crc kubenswrapper[4880]: I1201 04:00:13.395282 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:13 crc kubenswrapper[4880]: I1201 04:00:13.395320 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:14 crc kubenswrapper[4880]: I1201 04:00:14.454491 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4tz82" podUID="c204c239-5654-4a92-a800-3896b2c66452" 
containerName="registry-server" probeResult="failure" output=< Dec 01 04:00:14 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:00:14 crc kubenswrapper[4880]: > Dec 01 04:00:14 crc kubenswrapper[4880]: I1201 04:00:14.846305 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzr6f"] Dec 01 04:00:14 crc kubenswrapper[4880]: I1201 04:00:14.994365 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wzr6f" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="registry-server" containerID="cri-o://230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1" gracePeriod=2 Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.599227 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.758839 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/420271b0-3c53-4a5a-8611-639481161846-kube-api-access-kw8zb\") pod \"420271b0-3c53-4a5a-8611-639481161846\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.759602 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-catalog-content\") pod \"420271b0-3c53-4a5a-8611-639481161846\" (UID: \"420271b0-3c53-4a5a-8611-639481161846\") " Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.759698 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-utilities\") pod \"420271b0-3c53-4a5a-8611-639481161846\" (UID: 
\"420271b0-3c53-4a5a-8611-639481161846\") " Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.760229 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-utilities" (OuterVolumeSpecName: "utilities") pod "420271b0-3c53-4a5a-8611-639481161846" (UID: "420271b0-3c53-4a5a-8611-639481161846"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.761228 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.769822 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420271b0-3c53-4a5a-8611-639481161846-kube-api-access-kw8zb" (OuterVolumeSpecName: "kube-api-access-kw8zb") pod "420271b0-3c53-4a5a-8611-639481161846" (UID: "420271b0-3c53-4a5a-8611-639481161846"). InnerVolumeSpecName "kube-api-access-kw8zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.804433 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "420271b0-3c53-4a5a-8611-639481161846" (UID: "420271b0-3c53-4a5a-8611-639481161846"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.863168 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/420271b0-3c53-4a5a-8611-639481161846-kube-api-access-kw8zb\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:15 crc kubenswrapper[4880]: I1201 04:00:15.863210 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420271b0-3c53-4a5a-8611-639481161846-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.008087 4880 generic.go:334] "Generic (PLEG): container finished" podID="420271b0-3c53-4a5a-8611-639481161846" containerID="230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1" exitCode=0 Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.008129 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzr6f" event={"ID":"420271b0-3c53-4a5a-8611-639481161846","Type":"ContainerDied","Data":"230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1"} Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.008138 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wzr6f" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.008158 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wzr6f" event={"ID":"420271b0-3c53-4a5a-8611-639481161846","Type":"ContainerDied","Data":"bbbd4796f536212879d2c3765a8a6cf019cbc008e0ff39d1d5f40ba4c24a8c54"} Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.008177 4880 scope.go:117] "RemoveContainer" containerID="230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.043740 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzr6f"] Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.046814 4880 scope.go:117] "RemoveContainer" containerID="0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.055497 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wzr6f"] Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.071225 4880 scope.go:117] "RemoveContainer" containerID="af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.107451 4880 scope.go:117] "RemoveContainer" containerID="230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1" Dec 01 04:00:16 crc kubenswrapper[4880]: E1201 04:00:16.108485 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1\": container with ID starting with 230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1 not found: ID does not exist" containerID="230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.108535 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1"} err="failed to get container status \"230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1\": rpc error: code = NotFound desc = could not find container \"230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1\": container with ID starting with 230160a55080f37236f4ec73d3376184933324b3962b8dac67c241c73fea21e1 not found: ID does not exist" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.108557 4880 scope.go:117] "RemoveContainer" containerID="0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948" Dec 01 04:00:16 crc kubenswrapper[4880]: E1201 04:00:16.108973 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948\": container with ID starting with 0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948 not found: ID does not exist" containerID="0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.109016 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948"} err="failed to get container status \"0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948\": rpc error: code = NotFound desc = could not find container \"0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948\": container with ID starting with 0e8fa2350810e811768bcf2bf7e0e9485714e4c1f438f863e1f031082429a948 not found: ID does not exist" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.109039 4880 scope.go:117] "RemoveContainer" containerID="af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5" Dec 01 04:00:16 crc kubenswrapper[4880]: E1201 
04:00:16.109341 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5\": container with ID starting with af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5 not found: ID does not exist" containerID="af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.109478 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5"} err="failed to get container status \"af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5\": rpc error: code = NotFound desc = could not find container \"af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5\": container with ID starting with af285060161f5291668299330311bea623441b85658e873bc94c65e613f2f3f5 not found: ID does not exist" Dec 01 04:00:16 crc kubenswrapper[4880]: I1201 04:00:16.794834 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420271b0-3c53-4a5a-8611-639481161846" path="/var/lib/kubelet/pods/420271b0-3c53-4a5a-8611-639481161846/volumes" Dec 01 04:00:20 crc kubenswrapper[4880]: I1201 04:00:20.805175 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:00:20 crc kubenswrapper[4880]: E1201 04:00:20.807069 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:00:24 crc kubenswrapper[4880]: I1201 04:00:24.463811 
4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4tz82" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="registry-server" probeResult="failure" output=< Dec 01 04:00:24 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:00:24 crc kubenswrapper[4880]: > Dec 01 04:00:33 crc kubenswrapper[4880]: I1201 04:00:33.454224 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:33 crc kubenswrapper[4880]: I1201 04:00:33.500999 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:33 crc kubenswrapper[4880]: I1201 04:00:33.785510 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:00:33 crc kubenswrapper[4880]: E1201 04:00:33.785756 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.251133 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4tz82"] Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.253452 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4tz82" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="registry-server" containerID="cri-o://7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7" gracePeriod=2 Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 
04:00:36.804309 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.871696 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp2m5\" (UniqueName: \"kubernetes.io/projected/c204c239-5654-4a92-a800-3896b2c66452-kube-api-access-qp2m5\") pod \"c204c239-5654-4a92-a800-3896b2c66452\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.871821 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-catalog-content\") pod \"c204c239-5654-4a92-a800-3896b2c66452\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.871856 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-utilities\") pod \"c204c239-5654-4a92-a800-3896b2c66452\" (UID: \"c204c239-5654-4a92-a800-3896b2c66452\") " Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.873198 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-utilities" (OuterVolumeSpecName: "utilities") pod "c204c239-5654-4a92-a800-3896b2c66452" (UID: "c204c239-5654-4a92-a800-3896b2c66452"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.890894 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c204c239-5654-4a92-a800-3896b2c66452-kube-api-access-qp2m5" (OuterVolumeSpecName: "kube-api-access-qp2m5") pod "c204c239-5654-4a92-a800-3896b2c66452" (UID: "c204c239-5654-4a92-a800-3896b2c66452"). InnerVolumeSpecName "kube-api-access-qp2m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.975088 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.975118 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp2m5\" (UniqueName: \"kubernetes.io/projected/c204c239-5654-4a92-a800-3896b2c66452-kube-api-access-qp2m5\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:36 crc kubenswrapper[4880]: I1201 04:00:36.991327 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c204c239-5654-4a92-a800-3896b2c66452" (UID: "c204c239-5654-4a92-a800-3896b2c66452"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.076704 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c204c239-5654-4a92-a800-3896b2c66452-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.264585 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerDied","Data":"7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7"} Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.264647 4880 scope.go:117] "RemoveContainer" containerID="7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.264731 4880 generic.go:334] "Generic (PLEG): container finished" podID="c204c239-5654-4a92-a800-3896b2c66452" containerID="7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7" exitCode=0 Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.264793 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4tz82" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.264806 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tz82" event={"ID":"c204c239-5654-4a92-a800-3896b2c66452","Type":"ContainerDied","Data":"83413e09ad0a37d49e7595363f4a6b8913e82cd34d3a7f7f21780188cf1e3492"} Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.309100 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4tz82"] Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.309343 4880 scope.go:117] "RemoveContainer" containerID="c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.317877 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4tz82"] Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.344518 4880 scope.go:117] "RemoveContainer" containerID="89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.400792 4880 scope.go:117] "RemoveContainer" containerID="7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7" Dec 01 04:00:37 crc kubenswrapper[4880]: E1201 04:00:37.401246 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7\": container with ID starting with 7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7 not found: ID does not exist" containerID="7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.401356 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7"} err="failed to get container 
status \"7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7\": rpc error: code = NotFound desc = could not find container \"7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7\": container with ID starting with 7e653b00b33c0f37470256246dbbc736a05f72e654c573bf8ecbe553c5069ea7 not found: ID does not exist" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.401435 4880 scope.go:117] "RemoveContainer" containerID="c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44" Dec 01 04:00:37 crc kubenswrapper[4880]: E1201 04:00:37.401851 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44\": container with ID starting with c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44 not found: ID does not exist" containerID="c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.401940 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44"} err="failed to get container status \"c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44\": rpc error: code = NotFound desc = could not find container \"c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44\": container with ID starting with c2679a0aae2c92e90368d287ce1f6e7599c0517998e1d4f325de4d2648285d44 not found: ID does not exist" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.401965 4880 scope.go:117] "RemoveContainer" containerID="89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0" Dec 01 04:00:37 crc kubenswrapper[4880]: E1201 04:00:37.402277 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0\": container with ID starting with 89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0 not found: ID does not exist" containerID="89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0" Dec 01 04:00:37 crc kubenswrapper[4880]: I1201 04:00:37.402348 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0"} err="failed to get container status \"89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0\": rpc error: code = NotFound desc = could not find container \"89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0\": container with ID starting with 89154a276cd1889f198fec1fc83895c9cad1e7c1c2879df86c15af97d5fae3e0 not found: ID does not exist" Dec 01 04:00:38 crc kubenswrapper[4880]: I1201 04:00:38.807774 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c204c239-5654-4a92-a800-3896b2c66452" path="/var/lib/kubelet/pods/c204c239-5654-4a92-a800-3896b2c66452/volumes" Dec 01 04:00:42 crc kubenswrapper[4880]: I1201 04:00:42.932380 4880 scope.go:117] "RemoveContainer" containerID="6794b302f110ab047fee2417ed9708a89bd11e2eb2757428a0aeb11078e886a3" Dec 01 04:00:45 crc kubenswrapper[4880]: I1201 04:00:45.784026 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:00:45 crc kubenswrapper[4880]: E1201 04:00:45.784787 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:01:00 crc 
kubenswrapper[4880]: I1201 04:01:00.329749 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29409361-52tp6"] Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344467 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="extract-utilities" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.344509 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="extract-utilities" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344544 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="registry-server" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.344552 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="registry-server" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344566 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53e41ca-9a2b-446a-8d21-87b68bcbe84b" containerName="collect-profiles" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.344575 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53e41ca-9a2b-446a-8d21-87b68bcbe84b" containerName="collect-profiles" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344588 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="extract-content" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.344595 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="extract-content" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344610 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="registry-server" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 
04:01:00.344616 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="registry-server" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344636 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="extract-content" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.344642 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="extract-content" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.344660 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="extract-utilities" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.344668 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="extract-utilities" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.345207 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53e41ca-9a2b-446a-8d21-87b68bcbe84b" containerName="collect-profiles" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.345236 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="c204c239-5654-4a92-a800-3896b2c66452" containerName="registry-server" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.345285 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="420271b0-3c53-4a5a-8611-639481161846" containerName="registry-server" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.347235 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.446750 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409361-52tp6"] Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.447179 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-combined-ca-bundle\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.447256 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-fernet-keys\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.447302 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-config-data\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.447327 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqqt\" (UniqueName: \"kubernetes.io/projected/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-kube-api-access-6bqqt\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.549247 4880 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-config-data\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.549315 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqqt\" (UniqueName: \"kubernetes.io/projected/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-kube-api-access-6bqqt\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.549432 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-combined-ca-bundle\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.549590 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-fernet-keys\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.561763 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-config-data\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.563008 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-fernet-keys\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.564043 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-combined-ca-bundle\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.571523 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqqt\" (UniqueName: \"kubernetes.io/projected/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-kube-api-access-6bqqt\") pod \"keystone-cron-29409361-52tp6\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.673914 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:00 crc kubenswrapper[4880]: I1201 04:01:00.790740 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:01:00 crc kubenswrapper[4880]: E1201 04:01:00.791074 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:01:01 crc kubenswrapper[4880]: I1201 04:01:01.394217 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409361-52tp6"] Dec 01 04:01:01 crc kubenswrapper[4880]: I1201 04:01:01.516443 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409361-52tp6" event={"ID":"8679ecb1-41cb-4610-89f0-7047b4f2c7ff","Type":"ContainerStarted","Data":"f47cdf861bb41949479304f5cf2539be399ab991c8762ff686c5d980f94f0b78"} Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.114611 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9fng"] Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.116883 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.156667 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9fng"] Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.287576 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-utilities\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.287686 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-catalog-content\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.287731 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdp92\" (UniqueName: \"kubernetes.io/projected/bf922c0c-897c-464e-a039-95c9cff25d2c-kube-api-access-kdp92\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.390084 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-utilities\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.390163 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-catalog-content\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.390202 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdp92\" (UniqueName: \"kubernetes.io/projected/bf922c0c-897c-464e-a039-95c9cff25d2c-kube-api-access-kdp92\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.392333 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-utilities\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.392459 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-catalog-content\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.494282 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdp92\" (UniqueName: \"kubernetes.io/projected/bf922c0c-897c-464e-a039-95c9cff25d2c-kube-api-access-kdp92\") pod \"certified-operators-d9fng\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.525704 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29409361-52tp6" event={"ID":"8679ecb1-41cb-4610-89f0-7047b4f2c7ff","Type":"ContainerStarted","Data":"f1d51de827062900bf76c37da84ede251e2877cf800d1ab23e3d67f2e3efaf48"} Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.546616 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29409361-52tp6" podStartSLOduration=2.546598611 podStartE2EDuration="2.546598611s" podCreationTimestamp="2025-12-01 04:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 04:01:02.540445214 +0000 UTC m=+3892.051699586" watchObservedRunningTime="2025-12-01 04:01:02.546598611 +0000 UTC m=+3892.057852973" Dec 01 04:01:02 crc kubenswrapper[4880]: I1201 04:01:02.753232 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:03 crc kubenswrapper[4880]: I1201 04:01:03.350189 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9fng"] Dec 01 04:01:03 crc kubenswrapper[4880]: W1201 04:01:03.363678 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf922c0c_897c_464e_a039_95c9cff25d2c.slice/crio-e97f0e56f80f8e6fa5b4f07b2bb85fc9f1fb767ff5b43d214399e2ee72c12443 WatchSource:0}: Error finding container e97f0e56f80f8e6fa5b4f07b2bb85fc9f1fb767ff5b43d214399e2ee72c12443: Status 404 returned error can't find the container with id e97f0e56f80f8e6fa5b4f07b2bb85fc9f1fb767ff5b43d214399e2ee72c12443 Dec 01 04:01:03 crc kubenswrapper[4880]: I1201 04:01:03.534839 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerStarted","Data":"e97f0e56f80f8e6fa5b4f07b2bb85fc9f1fb767ff5b43d214399e2ee72c12443"} Dec 
01 04:01:04 crc kubenswrapper[4880]: I1201 04:01:04.548871 4880 generic.go:334] "Generic (PLEG): container finished" podID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerID="6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4" exitCode=0 Dec 01 04:01:04 crc kubenswrapper[4880]: I1201 04:01:04.549158 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerDied","Data":"6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4"} Dec 01 04:01:05 crc kubenswrapper[4880]: I1201 04:01:05.562849 4880 generic.go:334] "Generic (PLEG): container finished" podID="8679ecb1-41cb-4610-89f0-7047b4f2c7ff" containerID="f1d51de827062900bf76c37da84ede251e2877cf800d1ab23e3d67f2e3efaf48" exitCode=0 Dec 01 04:01:05 crc kubenswrapper[4880]: I1201 04:01:05.562970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409361-52tp6" event={"ID":"8679ecb1-41cb-4610-89f0-7047b4f2c7ff","Type":"ContainerDied","Data":"f1d51de827062900bf76c37da84ede251e2877cf800d1ab23e3d67f2e3efaf48"} Dec 01 04:01:06 crc kubenswrapper[4880]: I1201 04:01:06.572969 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerStarted","Data":"c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca"} Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.077341 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.193801 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-combined-ca-bundle\") pod \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.194221 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-config-data\") pod \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.194333 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-fernet-keys\") pod \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.194508 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bqqt\" (UniqueName: \"kubernetes.io/projected/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-kube-api-access-6bqqt\") pod \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\" (UID: \"8679ecb1-41cb-4610-89f0-7047b4f2c7ff\") " Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.212595 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8679ecb1-41cb-4610-89f0-7047b4f2c7ff" (UID: "8679ecb1-41cb-4610-89f0-7047b4f2c7ff"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.253377 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-kube-api-access-6bqqt" (OuterVolumeSpecName: "kube-api-access-6bqqt") pod "8679ecb1-41cb-4610-89f0-7047b4f2c7ff" (UID: "8679ecb1-41cb-4610-89f0-7047b4f2c7ff"). InnerVolumeSpecName "kube-api-access-6bqqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.262382 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8679ecb1-41cb-4610-89f0-7047b4f2c7ff" (UID: "8679ecb1-41cb-4610-89f0-7047b4f2c7ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.278118 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-config-data" (OuterVolumeSpecName: "config-data") pod "8679ecb1-41cb-4610-89f0-7047b4f2c7ff" (UID: "8679ecb1-41cb-4610-89f0-7047b4f2c7ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.296175 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.296206 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.296214 4880 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.296223 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bqqt\" (UniqueName: \"kubernetes.io/projected/8679ecb1-41cb-4610-89f0-7047b4f2c7ff-kube-api-access-6bqqt\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.584076 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409361-52tp6" event={"ID":"8679ecb1-41cb-4610-89f0-7047b4f2c7ff","Type":"ContainerDied","Data":"f47cdf861bb41949479304f5cf2539be399ab991c8762ff686c5d980f94f0b78"} Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.585351 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409361-52tp6" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.585314 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47cdf861bb41949479304f5cf2539be399ab991c8762ff686c5d980f94f0b78" Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.587775 4880 generic.go:334] "Generic (PLEG): container finished" podID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerID="c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca" exitCode=0 Dec 01 04:01:07 crc kubenswrapper[4880]: I1201 04:01:07.588080 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerDied","Data":"c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca"} Dec 01 04:01:08 crc kubenswrapper[4880]: I1201 04:01:08.604505 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerStarted","Data":"0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca"} Dec 01 04:01:08 crc kubenswrapper[4880]: I1201 04:01:08.634384 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9fng" podStartSLOduration=3.015992834 podStartE2EDuration="6.634364076s" podCreationTimestamp="2025-12-01 04:01:02 +0000 UTC" firstStartedPulling="2025-12-01 04:01:04.552384832 +0000 UTC m=+3894.063639244" lastFinishedPulling="2025-12-01 04:01:08.170756124 +0000 UTC m=+3897.682010486" observedRunningTime="2025-12-01 04:01:08.628621438 +0000 UTC m=+3898.139875830" watchObservedRunningTime="2025-12-01 04:01:08.634364076 +0000 UTC m=+3898.145618458" Dec 01 04:01:12 crc kubenswrapper[4880]: I1201 04:01:12.754661 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:12 crc kubenswrapper[4880]: I1201 04:01:12.755133 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:12 crc kubenswrapper[4880]: I1201 04:01:12.784768 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:01:12 crc kubenswrapper[4880]: E1201 04:01:12.785043 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:01:13 crc kubenswrapper[4880]: I1201 04:01:13.826161 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-d9fng" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="registry-server" probeResult="failure" output=< Dec 01 04:01:13 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:01:13 crc kubenswrapper[4880]: > Dec 01 04:01:22 crc kubenswrapper[4880]: I1201 04:01:22.805754 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:22 crc kubenswrapper[4880]: I1201 04:01:22.860270 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:23 crc kubenswrapper[4880]: I1201 04:01:23.068400 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9fng"] Dec 01 04:01:24 crc kubenswrapper[4880]: I1201 04:01:24.763069 4880 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-d9fng" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="registry-server" containerID="cri-o://0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca" gracePeriod=2 Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.572900 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.693135 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-catalog-content\") pod \"bf922c0c-897c-464e-a039-95c9cff25d2c\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.693474 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-utilities\") pod \"bf922c0c-897c-464e-a039-95c9cff25d2c\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.693574 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdp92\" (UniqueName: \"kubernetes.io/projected/bf922c0c-897c-464e-a039-95c9cff25d2c-kube-api-access-kdp92\") pod \"bf922c0c-897c-464e-a039-95c9cff25d2c\" (UID: \"bf922c0c-897c-464e-a039-95c9cff25d2c\") " Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.694654 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-utilities" (OuterVolumeSpecName: "utilities") pod "bf922c0c-897c-464e-a039-95c9cff25d2c" (UID: "bf922c0c-897c-464e-a039-95c9cff25d2c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.709519 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf922c0c-897c-464e-a039-95c9cff25d2c-kube-api-access-kdp92" (OuterVolumeSpecName: "kube-api-access-kdp92") pod "bf922c0c-897c-464e-a039-95c9cff25d2c" (UID: "bf922c0c-897c-464e-a039-95c9cff25d2c"). InnerVolumeSpecName "kube-api-access-kdp92". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.763449 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf922c0c-897c-464e-a039-95c9cff25d2c" (UID: "bf922c0c-897c-464e-a039-95c9cff25d2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.776062 4880 generic.go:334] "Generic (PLEG): container finished" podID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerID="0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca" exitCode=0 Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.776109 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerDied","Data":"0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca"} Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.776142 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9fng" event={"ID":"bf922c0c-897c-464e-a039-95c9cff25d2c","Type":"ContainerDied","Data":"e97f0e56f80f8e6fa5b4f07b2bb85fc9f1fb767ff5b43d214399e2ee72c12443"} Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.776165 4880 scope.go:117] "RemoveContainer" 
containerID="0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.776322 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9fng" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.795675 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdp92\" (UniqueName: \"kubernetes.io/projected/bf922c0c-897c-464e-a039-95c9cff25d2c-kube-api-access-kdp92\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.795699 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.795708 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf922c0c-897c-464e-a039-95c9cff25d2c-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.827936 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9fng"] Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.837656 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9fng"] Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.839420 4880 scope.go:117] "RemoveContainer" containerID="c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.861041 4880 scope.go:117] "RemoveContainer" containerID="6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.915938 4880 scope.go:117] "RemoveContainer" containerID="0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca" Dec 01 04:01:25 crc 
kubenswrapper[4880]: E1201 04:01:25.924730 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca\": container with ID starting with 0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca not found: ID does not exist" containerID="0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.924850 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca"} err="failed to get container status \"0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca\": rpc error: code = NotFound desc = could not find container \"0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca\": container with ID starting with 0fe3b946c7aee7d2ca2637d678bbb6f3656a38f21593c258d3da23eb9e16d0ca not found: ID does not exist" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.924901 4880 scope.go:117] "RemoveContainer" containerID="c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca" Dec 01 04:01:25 crc kubenswrapper[4880]: E1201 04:01:25.925669 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca\": container with ID starting with c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca not found: ID does not exist" containerID="c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.925713 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca"} err="failed to get container status 
\"c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca\": rpc error: code = NotFound desc = could not find container \"c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca\": container with ID starting with c2c6d7085512feb4772f1002d689d50aa6bccde28f0f6e355a2122af9c75a9ca not found: ID does not exist" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.925847 4880 scope.go:117] "RemoveContainer" containerID="6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4" Dec 01 04:01:25 crc kubenswrapper[4880]: E1201 04:01:25.926526 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4\": container with ID starting with 6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4 not found: ID does not exist" containerID="6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4" Dec 01 04:01:25 crc kubenswrapper[4880]: I1201 04:01:25.926551 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4"} err="failed to get container status \"6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4\": rpc error: code = NotFound desc = could not find container \"6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4\": container with ID starting with 6713bc1edb36c127f9738d9a267179bf1e07f60b0a3643375df583db75cba4d4 not found: ID does not exist" Dec 01 04:01:26 crc kubenswrapper[4880]: I1201 04:01:26.803144 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" path="/var/lib/kubelet/pods/bf922c0c-897c-464e-a039-95c9cff25d2c/volumes" Dec 01 04:01:27 crc kubenswrapper[4880]: I1201 04:01:27.784580 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 
04:01:28 crc kubenswrapper[4880]: I1201 04:01:28.812946 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"349615ddaff46c2df8037526c7d6bb8262c3b5ef0ba8e46978cd07ca5511dfb8"} Dec 01 04:02:28 crc kubenswrapper[4880]: E1201 04:02:28.102694 4880 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:38128->38.102.83.39:42095: write tcp 38.102.83.39:38128->38.102.83.39:42095: write: connection reset by peer Dec 01 04:03:47 crc kubenswrapper[4880]: I1201 04:03:47.369422 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:03:47 crc kubenswrapper[4880]: I1201 04:03:47.370520 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:04:17 crc kubenswrapper[4880]: I1201 04:04:17.368562 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:04:17 crc kubenswrapper[4880]: I1201 04:04:17.369434 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.369245 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.370021 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.370853 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.372936 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"349615ddaff46c2df8037526c7d6bb8262c3b5ef0ba8e46978cd07ca5511dfb8"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.374642 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://349615ddaff46c2df8037526c7d6bb8262c3b5ef0ba8e46978cd07ca5511dfb8" gracePeriod=600 Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.837229 4880 generic.go:334] "Generic (PLEG): container 
finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="349615ddaff46c2df8037526c7d6bb8262c3b5ef0ba8e46978cd07ca5511dfb8" exitCode=0 Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.837495 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"349615ddaff46c2df8037526c7d6bb8262c3b5ef0ba8e46978cd07ca5511dfb8"} Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.837681 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"} Dec 01 04:04:47 crc kubenswrapper[4880]: I1201 04:04:47.838092 4880 scope.go:117] "RemoveContainer" containerID="e24948012e0b9e5d08c052c416a7c25edfae43d6c9f6c1e3fbd2666eee50a8ca" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.206829 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jwfr"] Dec 01 04:05:38 crc kubenswrapper[4880]: E1201 04:05:38.208968 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="registry-server" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.208983 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="registry-server" Dec 01 04:05:38 crc kubenswrapper[4880]: E1201 04:05:38.209003 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="extract-content" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.209009 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="extract-content" Dec 01 04:05:38 crc 
kubenswrapper[4880]: E1201 04:05:38.209052 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="extract-utilities" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.209058 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="extract-utilities" Dec 01 04:05:38 crc kubenswrapper[4880]: E1201 04:05:38.209072 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8679ecb1-41cb-4610-89f0-7047b4f2c7ff" containerName="keystone-cron" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.209078 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8679ecb1-41cb-4610-89f0-7047b4f2c7ff" containerName="keystone-cron" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.209502 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8679ecb1-41cb-4610-89f0-7047b4f2c7ff" containerName="keystone-cron" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.209527 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf922c0c-897c-464e-a039-95c9cff25d2c" containerName="registry-server" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.212337 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.283204 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jwfr"] Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.306956 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-utilities\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.307119 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4blg5\" (UniqueName: \"kubernetes.io/projected/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-kube-api-access-4blg5\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.307162 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-catalog-content\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.408661 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4blg5\" (UniqueName: \"kubernetes.io/projected/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-kube-api-access-4blg5\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.408729 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-catalog-content\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.408848 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-utilities\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.410372 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-utilities\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.410711 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-catalog-content\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.437515 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4blg5\" (UniqueName: \"kubernetes.io/projected/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-kube-api-access-4blg5\") pod \"community-operators-9jwfr\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:38 crc kubenswrapper[4880]: I1201 04:05:38.546137 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:39 crc kubenswrapper[4880]: I1201 04:05:39.227817 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jwfr"] Dec 01 04:05:39 crc kubenswrapper[4880]: I1201 04:05:39.394327 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerStarted","Data":"d9159cd3820124fff271c65be9016aa858384fa767cef4fb1d922798f68a3dee"} Dec 01 04:05:40 crc kubenswrapper[4880]: I1201 04:05:40.403971 4880 generic.go:334] "Generic (PLEG): container finished" podID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerID="3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6" exitCode=0 Dec 01 04:05:40 crc kubenswrapper[4880]: I1201 04:05:40.404075 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerDied","Data":"3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6"} Dec 01 04:05:40 crc kubenswrapper[4880]: I1201 04:05:40.408720 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:05:41 crc kubenswrapper[4880]: I1201 04:05:41.414277 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerStarted","Data":"31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210"} Dec 01 04:05:43 crc kubenswrapper[4880]: I1201 04:05:43.452036 4880 generic.go:334] "Generic (PLEG): container finished" podID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerID="31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210" exitCode=0 Dec 01 04:05:43 crc kubenswrapper[4880]: I1201 04:05:43.452125 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerDied","Data":"31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210"} Dec 01 04:05:44 crc kubenswrapper[4880]: I1201 04:05:44.464129 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerStarted","Data":"81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed"} Dec 01 04:05:44 crc kubenswrapper[4880]: I1201 04:05:44.492970 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9jwfr" podStartSLOduration=3.010607134 podStartE2EDuration="6.492606319s" podCreationTimestamp="2025-12-01 04:05:38 +0000 UTC" firstStartedPulling="2025-12-01 04:05:40.407604384 +0000 UTC m=+4169.918858766" lastFinishedPulling="2025-12-01 04:05:43.889603579 +0000 UTC m=+4173.400857951" observedRunningTime="2025-12-01 04:05:44.484061102 +0000 UTC m=+4173.995315474" watchObservedRunningTime="2025-12-01 04:05:44.492606319 +0000 UTC m=+4174.003860691" Dec 01 04:05:48 crc kubenswrapper[4880]: I1201 04:05:48.546801 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:48 crc kubenswrapper[4880]: I1201 04:05:48.548022 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:48 crc kubenswrapper[4880]: I1201 04:05:48.596082 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:49 crc kubenswrapper[4880]: I1201 04:05:49.569220 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:49 crc kubenswrapper[4880]: I1201 
04:05:49.620205 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jwfr"] Dec 01 04:05:51 crc kubenswrapper[4880]: I1201 04:05:51.527783 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jwfr" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="registry-server" containerID="cri-o://81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed" gracePeriod=2 Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.091065 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.207894 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4blg5\" (UniqueName: \"kubernetes.io/projected/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-kube-api-access-4blg5\") pod \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.208002 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-catalog-content\") pod \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.208251 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-utilities\") pod \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\" (UID: \"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b\") " Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.210427 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-utilities" (OuterVolumeSpecName: 
"utilities") pod "8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" (UID: "8b56badc-4fe9-46ae-83b8-d03dd8c0d94b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.221442 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-kube-api-access-4blg5" (OuterVolumeSpecName: "kube-api-access-4blg5") pod "8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" (UID: "8b56badc-4fe9-46ae-83b8-d03dd8c0d94b"). InnerVolumeSpecName "kube-api-access-4blg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.269057 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" (UID: "8b56badc-4fe9-46ae-83b8-d03dd8c0d94b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.311709 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4blg5\" (UniqueName: \"kubernetes.io/projected/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-kube-api-access-4blg5\") on node \"crc\" DevicePath \"\"" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.311746 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.311758 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.539377 4880 generic.go:334] "Generic (PLEG): container finished" podID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerID="81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed" exitCode=0 Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.539417 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jwfr" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.539432 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerDied","Data":"81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed"} Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.539590 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jwfr" event={"ID":"8b56badc-4fe9-46ae-83b8-d03dd8c0d94b","Type":"ContainerDied","Data":"d9159cd3820124fff271c65be9016aa858384fa767cef4fb1d922798f68a3dee"} Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.539611 4880 scope.go:117] "RemoveContainer" containerID="81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.568966 4880 scope.go:117] "RemoveContainer" containerID="31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210" Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.586034 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jwfr"] Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.608504 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jwfr"] Dec 01 04:05:52 crc kubenswrapper[4880]: I1201 04:05:52.807701 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" path="/var/lib/kubelet/pods/8b56badc-4fe9-46ae-83b8-d03dd8c0d94b/volumes" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.000959 4880 scope.go:117] "RemoveContainer" containerID="3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.258055 4880 scope.go:117] "RemoveContainer" 
containerID="81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed" Dec 01 04:05:53 crc kubenswrapper[4880]: E1201 04:05:53.258834 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed\": container with ID starting with 81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed not found: ID does not exist" containerID="81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.259108 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed"} err="failed to get container status \"81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed\": rpc error: code = NotFound desc = could not find container \"81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed\": container with ID starting with 81a71f5da700d421e465306398e75a38410cc07d141758da60e1b54ddbcc16ed not found: ID does not exist" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.259223 4880 scope.go:117] "RemoveContainer" containerID="31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210" Dec 01 04:05:53 crc kubenswrapper[4880]: E1201 04:05:53.260309 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210\": container with ID starting with 31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210 not found: ID does not exist" containerID="31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.260466 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210"} err="failed to get container status \"31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210\": rpc error: code = NotFound desc = could not find container \"31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210\": container with ID starting with 31b7bbf42559500b6c4698dca57fd9591bc7d9e1d453e0999f20804a9869c210 not found: ID does not exist" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.260572 4880 scope.go:117] "RemoveContainer" containerID="3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6" Dec 01 04:05:53 crc kubenswrapper[4880]: E1201 04:05:53.261071 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6\": container with ID starting with 3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6 not found: ID does not exist" containerID="3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6" Dec 01 04:05:53 crc kubenswrapper[4880]: I1201 04:05:53.261201 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6"} err="failed to get container status \"3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6\": rpc error: code = NotFound desc = could not find container \"3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6\": container with ID starting with 3f28357a1cd86e4b260048f22f5e57f881426d6eaa2e1ed671481d9f6060add6 not found: ID does not exist" Dec 01 04:06:47 crc kubenswrapper[4880]: I1201 04:06:47.368837 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:06:47 crc kubenswrapper[4880]: I1201 04:06:47.369462 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:07:17 crc kubenswrapper[4880]: I1201 04:07:17.368998 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:07:17 crc kubenswrapper[4880]: I1201 04:07:17.370732 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.369486 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.370149 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 
04:07:47.370203 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.370983 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.371050 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" gracePeriod=600 Dec 01 04:07:47 crc kubenswrapper[4880]: E1201 04:07:47.508360 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.808949 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" exitCode=0 Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.808996 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"} Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.809031 4880 scope.go:117] "RemoveContainer" containerID="349615ddaff46c2df8037526c7d6bb8262c3b5ef0ba8e46978cd07ca5511dfb8" Dec 01 04:07:47 crc kubenswrapper[4880]: I1201 04:07:47.809728 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:07:47 crc kubenswrapper[4880]: E1201 04:07:47.810033 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:07:59 crc kubenswrapper[4880]: I1201 04:07:59.784276 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:07:59 crc kubenswrapper[4880]: E1201 04:07:59.785027 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:08:13 crc kubenswrapper[4880]: I1201 04:08:13.784683 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:08:13 crc kubenswrapper[4880]: E1201 04:08:13.785495 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:08:24 crc kubenswrapper[4880]: I1201 04:08:24.784577 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:08:24 crc kubenswrapper[4880]: E1201 04:08:24.786399 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:08:36 crc kubenswrapper[4880]: I1201 04:08:36.784209 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:08:36 crc kubenswrapper[4880]: E1201 04:08:36.785083 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:08:51 crc kubenswrapper[4880]: I1201 04:08:51.785013 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:08:51 crc kubenswrapper[4880]: E1201 04:08:51.786668 4880 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:09:06 crc kubenswrapper[4880]: I1201 04:09:06.783801 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:09:06 crc kubenswrapper[4880]: E1201 04:09:06.784407 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:09:20 crc kubenswrapper[4880]: I1201 04:09:20.789703 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:09:20 crc kubenswrapper[4880]: E1201 04:09:20.790419 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:09:35 crc kubenswrapper[4880]: I1201 04:09:35.783621 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:09:35 crc kubenswrapper[4880]: E1201 04:09:35.784343 4880 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:09:46 crc kubenswrapper[4880]: I1201 04:09:46.784665 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:09:46 crc kubenswrapper[4880]: E1201 04:09:46.787843 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:09:59 crc kubenswrapper[4880]: I1201 04:09:59.784065 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:09:59 crc kubenswrapper[4880]: E1201 04:09:59.784806 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.272957 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fmgwj"] Dec 01 04:10:08 crc kubenswrapper[4880]: E1201 04:10:08.274410 4880 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="extract-utilities" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.274428 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="extract-utilities" Dec 01 04:10:08 crc kubenswrapper[4880]: E1201 04:10:08.274460 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="registry-server" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.274470 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="registry-server" Dec 01 04:10:08 crc kubenswrapper[4880]: E1201 04:10:08.274483 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="extract-content" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.274492 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="extract-content" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.275008 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b56badc-4fe9-46ae-83b8-d03dd8c0d94b" containerName="registry-server" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.277346 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.329728 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-utilities\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.329791 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-catalog-content\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.329845 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2c8\" (UniqueName: \"kubernetes.io/projected/016a4e89-f395-44bc-bfa1-97f11a1668b7-kube-api-access-xr2c8\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.352977 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmgwj"] Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.430995 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2c8\" (UniqueName: \"kubernetes.io/projected/016a4e89-f395-44bc-bfa1-97f11a1668b7-kube-api-access-xr2c8\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.431129 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-utilities\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.431173 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-catalog-content\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.432222 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-catalog-content\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.432322 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-utilities\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.454763 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2c8\" (UniqueName: \"kubernetes.io/projected/016a4e89-f395-44bc-bfa1-97f11a1668b7-kube-api-access-xr2c8\") pod \"redhat-operators-fmgwj\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:08 crc kubenswrapper[4880]: I1201 04:10:08.611536 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:09 crc kubenswrapper[4880]: I1201 04:10:09.336763 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fmgwj"] Dec 01 04:10:10 crc kubenswrapper[4880]: I1201 04:10:10.237246 4880 generic.go:334] "Generic (PLEG): container finished" podID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerID="4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3" exitCode=0 Dec 01 04:10:10 crc kubenswrapper[4880]: I1201 04:10:10.237297 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerDied","Data":"4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3"} Dec 01 04:10:10 crc kubenswrapper[4880]: I1201 04:10:10.237713 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerStarted","Data":"e773be3bcef087f29e6f268b4b7c0394aaa05009b699faf7f213d092c67cc198"} Dec 01 04:10:11 crc kubenswrapper[4880]: I1201 04:10:11.249467 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerStarted","Data":"ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053"} Dec 01 04:10:12 crc kubenswrapper[4880]: I1201 04:10:12.785468 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:10:12 crc kubenswrapper[4880]: E1201 04:10:12.786039 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:10:14 crc kubenswrapper[4880]: I1201 04:10:14.311058 4880 generic.go:334] "Generic (PLEG): container finished" podID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerID="ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053" exitCode=0 Dec 01 04:10:14 crc kubenswrapper[4880]: I1201 04:10:14.311141 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerDied","Data":"ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053"} Dec 01 04:10:15 crc kubenswrapper[4880]: I1201 04:10:15.321640 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerStarted","Data":"3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d"} Dec 01 04:10:15 crc kubenswrapper[4880]: I1201 04:10:15.345062 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fmgwj" podStartSLOduration=2.75794463 podStartE2EDuration="7.344015696s" podCreationTimestamp="2025-12-01 04:10:08 +0000 UTC" firstStartedPulling="2025-12-01 04:10:10.239610947 +0000 UTC m=+4439.750865309" lastFinishedPulling="2025-12-01 04:10:14.825681983 +0000 UTC m=+4444.336936375" observedRunningTime="2025-12-01 04:10:15.336550594 +0000 UTC m=+4444.847804966" watchObservedRunningTime="2025-12-01 04:10:15.344015696 +0000 UTC m=+4444.855270068" Dec 01 04:10:18 crc kubenswrapper[4880]: I1201 04:10:18.612237 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:18 crc kubenswrapper[4880]: I1201 04:10:18.613771 
4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:19 crc kubenswrapper[4880]: I1201 04:10:19.674274 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fmgwj" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="registry-server" probeResult="failure" output=< Dec 01 04:10:19 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:10:19 crc kubenswrapper[4880]: > Dec 01 04:10:26 crc kubenswrapper[4880]: I1201 04:10:26.784537 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:10:26 crc kubenswrapper[4880]: E1201 04:10:26.785029 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:10:28 crc kubenswrapper[4880]: I1201 04:10:28.764211 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:28 crc kubenswrapper[4880]: I1201 04:10:28.831749 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:29 crc kubenswrapper[4880]: I1201 04:10:29.006677 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fmgwj"] Dec 01 04:10:30 crc kubenswrapper[4880]: I1201 04:10:30.472122 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fmgwj" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" 
containerName="registry-server" containerID="cri-o://3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d" gracePeriod=2 Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.055236 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.131678 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2c8\" (UniqueName: \"kubernetes.io/projected/016a4e89-f395-44bc-bfa1-97f11a1668b7-kube-api-access-xr2c8\") pod \"016a4e89-f395-44bc-bfa1-97f11a1668b7\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.131745 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-catalog-content\") pod \"016a4e89-f395-44bc-bfa1-97f11a1668b7\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.131809 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-utilities\") pod \"016a4e89-f395-44bc-bfa1-97f11a1668b7\" (UID: \"016a4e89-f395-44bc-bfa1-97f11a1668b7\") " Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.132687 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-utilities" (OuterVolumeSpecName: "utilities") pod "016a4e89-f395-44bc-bfa1-97f11a1668b7" (UID: "016a4e89-f395-44bc-bfa1-97f11a1668b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.150103 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016a4e89-f395-44bc-bfa1-97f11a1668b7-kube-api-access-xr2c8" (OuterVolumeSpecName: "kube-api-access-xr2c8") pod "016a4e89-f395-44bc-bfa1-97f11a1668b7" (UID: "016a4e89-f395-44bc-bfa1-97f11a1668b7"). InnerVolumeSpecName "kube-api-access-xr2c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.230689 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "016a4e89-f395-44bc-bfa1-97f11a1668b7" (UID: "016a4e89-f395-44bc-bfa1-97f11a1668b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.234439 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2c8\" (UniqueName: \"kubernetes.io/projected/016a4e89-f395-44bc-bfa1-97f11a1668b7-kube-api-access-xr2c8\") on node \"crc\" DevicePath \"\"" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.234487 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.234497 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/016a4e89-f395-44bc-bfa1-97f11a1668b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.482972 4880 generic.go:334] "Generic (PLEG): container finished" podID="016a4e89-f395-44bc-bfa1-97f11a1668b7" 
containerID="3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d" exitCode=0 Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.483315 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerDied","Data":"3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d"} Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.483400 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fmgwj" event={"ID":"016a4e89-f395-44bc-bfa1-97f11a1668b7","Type":"ContainerDied","Data":"e773be3bcef087f29e6f268b4b7c0394aaa05009b699faf7f213d092c67cc198"} Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.483434 4880 scope.go:117] "RemoveContainer" containerID="3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.483439 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fmgwj" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.515541 4880 scope.go:117] "RemoveContainer" containerID="ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.557924 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fmgwj"] Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.558658 4880 scope.go:117] "RemoveContainer" containerID="4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.569386 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fmgwj"] Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.619620 4880 scope.go:117] "RemoveContainer" containerID="3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d" Dec 01 04:10:31 crc kubenswrapper[4880]: E1201 04:10:31.620734 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d\": container with ID starting with 3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d not found: ID does not exist" containerID="3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.620923 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d"} err="failed to get container status \"3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d\": rpc error: code = NotFound desc = could not find container \"3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d\": container with ID starting with 3dda64ff87ac2af0b6cf145e097c665c08178a9beaab1f6fb226049a9d74364d not found: ID does 
not exist" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.621042 4880 scope.go:117] "RemoveContainer" containerID="ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053" Dec 01 04:10:31 crc kubenswrapper[4880]: E1201 04:10:31.623420 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053\": container with ID starting with ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053 not found: ID does not exist" containerID="ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.623484 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053"} err="failed to get container status \"ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053\": rpc error: code = NotFound desc = could not find container \"ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053\": container with ID starting with ffac6cd78b222ab2d411153f030a4c6c29221b468f5b385b1ef3333f9497c053 not found: ID does not exist" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.623529 4880 scope.go:117] "RemoveContainer" containerID="4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3" Dec 01 04:10:31 crc kubenswrapper[4880]: E1201 04:10:31.624122 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3\": container with ID starting with 4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3 not found: ID does not exist" containerID="4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3" Dec 01 04:10:31 crc kubenswrapper[4880]: I1201 04:10:31.624280 4880 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3"} err="failed to get container status \"4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3\": rpc error: code = NotFound desc = could not find container \"4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3\": container with ID starting with 4dd9cce583e7ee46f33a0a4bde39b447f4eebb69aa5efef23e4b765c2227e1d3 not found: ID does not exist" Dec 01 04:10:32 crc kubenswrapper[4880]: I1201 04:10:32.797090 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" path="/var/lib/kubelet/pods/016a4e89-f395-44bc-bfa1-97f11a1668b7/volumes" Dec 01 04:10:41 crc kubenswrapper[4880]: I1201 04:10:41.785121 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:10:41 crc kubenswrapper[4880]: E1201 04:10:41.786126 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:10:53 crc kubenswrapper[4880]: I1201 04:10:53.784044 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:10:53 crc kubenswrapper[4880]: E1201 04:10:53.785095 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:11:06 crc kubenswrapper[4880]: I1201 04:11:06.783749 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:11:06 crc kubenswrapper[4880]: E1201 04:11:06.784728 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.878590 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ltqmd"] Dec 01 04:11:11 crc kubenswrapper[4880]: E1201 04:11:11.879515 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="extract-content" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.879530 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="extract-content" Dec 01 04:11:11 crc kubenswrapper[4880]: E1201 04:11:11.879550 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="registry-server" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.879558 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="registry-server" Dec 01 04:11:11 crc kubenswrapper[4880]: E1201 04:11:11.879595 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="extract-utilities" Dec 01 04:11:11 crc kubenswrapper[4880]: 
I1201 04:11:11.879602 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="extract-utilities" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.879830 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="016a4e89-f395-44bc-bfa1-97f11a1668b7" containerName="registry-server" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.884901 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.895459 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltqmd"] Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.988297 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-catalog-content\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.988341 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2sgr\" (UniqueName: \"kubernetes.io/projected/91cb69e7-8498-4579-a154-763ef479220d-kube-api-access-x2sgr\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:11 crc kubenswrapper[4880]: I1201 04:11:11.988375 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-utilities\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc 
kubenswrapper[4880]: I1201 04:11:12.090058 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-catalog-content\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc kubenswrapper[4880]: I1201 04:11:12.090324 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2sgr\" (UniqueName: \"kubernetes.io/projected/91cb69e7-8498-4579-a154-763ef479220d-kube-api-access-x2sgr\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc kubenswrapper[4880]: I1201 04:11:12.090443 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-utilities\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc kubenswrapper[4880]: I1201 04:11:12.090646 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-catalog-content\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc kubenswrapper[4880]: I1201 04:11:12.090968 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-utilities\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc kubenswrapper[4880]: I1201 04:11:12.119225 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2sgr\" (UniqueName: \"kubernetes.io/projected/91cb69e7-8498-4579-a154-763ef479220d-kube-api-access-x2sgr\") pod \"redhat-marketplace-ltqmd\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:12 crc kubenswrapper[4880]: I1201 04:11:12.219238 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:13 crc kubenswrapper[4880]: I1201 04:11:12.713823 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltqmd"] Dec 01 04:11:13 crc kubenswrapper[4880]: I1201 04:11:12.975379 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerStarted","Data":"558c076e7c9e4cab143f1f5eca8184f2ab08beb798497d51329a5a9e8789f6df"} Dec 01 04:11:13 crc kubenswrapper[4880]: I1201 04:11:12.975647 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerStarted","Data":"c9d06dc85ff011401ec306cdad0f4091f6aa374f3326b342c0d791d6dc468d00"} Dec 01 04:11:13 crc kubenswrapper[4880]: I1201 04:11:13.988592 4880 generic.go:334] "Generic (PLEG): container finished" podID="91cb69e7-8498-4579-a154-763ef479220d" containerID="558c076e7c9e4cab143f1f5eca8184f2ab08beb798497d51329a5a9e8789f6df" exitCode=0 Dec 01 04:11:13 crc kubenswrapper[4880]: I1201 04:11:13.988676 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerDied","Data":"558c076e7c9e4cab143f1f5eca8184f2ab08beb798497d51329a5a9e8789f6df"} Dec 01 04:11:13 crc kubenswrapper[4880]: I1201 04:11:13.991575 4880 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:11:15 crc kubenswrapper[4880]: I1201 04:11:14.999848 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerStarted","Data":"e3eae91cf90bb97dd392d80399c70be388333c7c5094818cc7353f3a7019d0c2"} Dec 01 04:11:16 crc kubenswrapper[4880]: I1201 04:11:16.038439 4880 generic.go:334] "Generic (PLEG): container finished" podID="91cb69e7-8498-4579-a154-763ef479220d" containerID="e3eae91cf90bb97dd392d80399c70be388333c7c5094818cc7353f3a7019d0c2" exitCode=0 Dec 01 04:11:16 crc kubenswrapper[4880]: I1201 04:11:16.038809 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerDied","Data":"e3eae91cf90bb97dd392d80399c70be388333c7c5094818cc7353f3a7019d0c2"} Dec 01 04:11:17 crc kubenswrapper[4880]: I1201 04:11:17.050137 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerStarted","Data":"5ae0e34ac4366dc5d98326eb52fed758872d9c5a48a663cedcd59cd4d998cf9c"} Dec 01 04:11:17 crc kubenswrapper[4880]: I1201 04:11:17.075623 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ltqmd" podStartSLOduration=3.545411053 podStartE2EDuration="6.075606628s" podCreationTimestamp="2025-12-01 04:11:11 +0000 UTC" firstStartedPulling="2025-12-01 04:11:13.991179528 +0000 UTC m=+4503.502433940" lastFinishedPulling="2025-12-01 04:11:16.521375103 +0000 UTC m=+4506.032629515" observedRunningTime="2025-12-01 04:11:17.07076616 +0000 UTC m=+4506.582020532" watchObservedRunningTime="2025-12-01 04:11:17.075606628 +0000 UTC m=+4506.586861000" Dec 01 04:11:20 crc kubenswrapper[4880]: I1201 
04:11:20.801038 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:11:20 crc kubenswrapper[4880]: E1201 04:11:20.802807 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:11:22 crc kubenswrapper[4880]: I1201 04:11:22.219603 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:22 crc kubenswrapper[4880]: I1201 04:11:22.220621 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:22 crc kubenswrapper[4880]: I1201 04:11:22.292738 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:23 crc kubenswrapper[4880]: I1201 04:11:23.157077 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:23 crc kubenswrapper[4880]: I1201 04:11:23.209125 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltqmd"] Dec 01 04:11:25 crc kubenswrapper[4880]: I1201 04:11:25.127482 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ltqmd" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="registry-server" containerID="cri-o://5ae0e34ac4366dc5d98326eb52fed758872d9c5a48a663cedcd59cd4d998cf9c" gracePeriod=2 Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.137082 4880 
generic.go:334] "Generic (PLEG): container finished" podID="91cb69e7-8498-4579-a154-763ef479220d" containerID="5ae0e34ac4366dc5d98326eb52fed758872d9c5a48a663cedcd59cd4d998cf9c" exitCode=0 Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.137124 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerDied","Data":"5ae0e34ac4366dc5d98326eb52fed758872d9c5a48a663cedcd59cd4d998cf9c"} Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.518209 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltqmd" Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.644908 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-utilities\") pod \"91cb69e7-8498-4579-a154-763ef479220d\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.645149 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2sgr\" (UniqueName: \"kubernetes.io/projected/91cb69e7-8498-4579-a154-763ef479220d-kube-api-access-x2sgr\") pod \"91cb69e7-8498-4579-a154-763ef479220d\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.645244 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-catalog-content\") pod \"91cb69e7-8498-4579-a154-763ef479220d\" (UID: \"91cb69e7-8498-4579-a154-763ef479220d\") " Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.645533 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-utilities" 
(OuterVolumeSpecName: "utilities") pod "91cb69e7-8498-4579-a154-763ef479220d" (UID: "91cb69e7-8498-4579-a154-763ef479220d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.645985 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.654612 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91cb69e7-8498-4579-a154-763ef479220d-kube-api-access-x2sgr" (OuterVolumeSpecName: "kube-api-access-x2sgr") pod "91cb69e7-8498-4579-a154-763ef479220d" (UID: "91cb69e7-8498-4579-a154-763ef479220d"). InnerVolumeSpecName "kube-api-access-x2sgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.664752 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91cb69e7-8498-4579-a154-763ef479220d" (UID: "91cb69e7-8498-4579-a154-763ef479220d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.747671 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2sgr\" (UniqueName: \"kubernetes.io/projected/91cb69e7-8498-4579-a154-763ef479220d-kube-api-access-x2sgr\") on node \"crc\" DevicePath \"\""
Dec 01 04:11:26 crc kubenswrapper[4880]: I1201 04:11:26.747892 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91cb69e7-8498-4579-a154-763ef479220d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.150141 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ltqmd" event={"ID":"91cb69e7-8498-4579-a154-763ef479220d","Type":"ContainerDied","Data":"c9d06dc85ff011401ec306cdad0f4091f6aa374f3326b342c0d791d6dc468d00"}
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.150220 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ltqmd"
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.150523 4880 scope.go:117] "RemoveContainer" containerID="5ae0e34ac4366dc5d98326eb52fed758872d9c5a48a663cedcd59cd4d998cf9c"
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.174770 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltqmd"]
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.178237 4880 scope.go:117] "RemoveContainer" containerID="e3eae91cf90bb97dd392d80399c70be388333c7c5094818cc7353f3a7019d0c2"
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.182346 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ltqmd"]
Dec 01 04:11:27 crc kubenswrapper[4880]: I1201 04:11:27.609218 4880 scope.go:117] "RemoveContainer" containerID="558c076e7c9e4cab143f1f5eca8184f2ab08beb798497d51329a5a9e8789f6df"
Dec 01 04:11:28 crc kubenswrapper[4880]: I1201 04:11:28.799633 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91cb69e7-8498-4579-a154-763ef479220d" path="/var/lib/kubelet/pods/91cb69e7-8498-4579-a154-763ef479220d/volumes"
Dec 01 04:11:32 crc kubenswrapper[4880]: I1201 04:11:32.783826 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:11:32 crc kubenswrapper[4880]: E1201 04:11:32.784496 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:11:44 crc kubenswrapper[4880]: I1201 04:11:44.783934 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:11:44 crc kubenswrapper[4880]: E1201 04:11:44.784821 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:11:55 crc kubenswrapper[4880]: I1201 04:11:55.784271 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:11:55 crc kubenswrapper[4880]: E1201 04:11:55.785509 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:12:09 crc kubenswrapper[4880]: I1201 04:12:09.784212 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:12:09 crc kubenswrapper[4880]: E1201 04:12:09.785107 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:12:21 crc kubenswrapper[4880]: I1201 04:12:21.784460 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:12:21 crc kubenswrapper[4880]: E1201 04:12:21.785114 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:12:32 crc kubenswrapper[4880]: I1201 04:12:32.787756 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:12:32 crc kubenswrapper[4880]: E1201 04:12:32.788806 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:12:44 crc kubenswrapper[4880]: I1201 04:12:44.785333 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:12:44 crc kubenswrapper[4880]: E1201 04:12:44.786189 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:12:59 crc kubenswrapper[4880]: I1201 04:12:59.784678 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551"
Dec 01 04:13:00 crc kubenswrapper[4880]: I1201 04:13:00.119292 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"997baa5b4ed451ba7c78e8285dcf8e3d295f8f8975ddcdd68231c8c4f4eed108"}
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.276834 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhqr7"]
Dec 01 04:14:38 crc kubenswrapper[4880]: E1201 04:14:38.278074 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="registry-server"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.278100 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="registry-server"
Dec 01 04:14:38 crc kubenswrapper[4880]: E1201 04:14:38.278140 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="extract-utilities"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.278154 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="extract-utilities"
Dec 01 04:14:38 crc kubenswrapper[4880]: E1201 04:14:38.278207 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="extract-content"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.278221 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="extract-content"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.278651 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="91cb69e7-8498-4579-a154-763ef479220d" containerName="registry-server"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.281379 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.298140 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhqr7"]
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.391256 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-utilities\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.391368 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-catalog-content\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.391454 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85d82\" (UniqueName: \"kubernetes.io/projected/d7a6c822-5a51-40d3-9775-a0c0f5d24874-kube-api-access-85d82\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.492568 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-catalog-content\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.492672 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85d82\" (UniqueName: \"kubernetes.io/projected/d7a6c822-5a51-40d3-9775-a0c0f5d24874-kube-api-access-85d82\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.492697 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-utilities\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.493206 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-catalog-content\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.493234 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-utilities\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.522900 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85d82\" (UniqueName: \"kubernetes.io/projected/d7a6c822-5a51-40d3-9775-a0c0f5d24874-kube-api-access-85d82\") pod \"certified-operators-rhqr7\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") " pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:38 crc kubenswrapper[4880]: I1201 04:14:38.616437 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:39 crc kubenswrapper[4880]: I1201 04:14:39.081515 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhqr7"]
Dec 01 04:14:39 crc kubenswrapper[4880]: I1201 04:14:39.147916 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerStarted","Data":"9cb9b82908ee5e322cb7175559102bc4c8cd16be71c92a0b482d2721408bc5fb"}
Dec 01 04:14:40 crc kubenswrapper[4880]: I1201 04:14:40.160482 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerID="64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c" exitCode=0
Dec 01 04:14:40 crc kubenswrapper[4880]: I1201 04:14:40.160583 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerDied","Data":"64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c"}
Dec 01 04:14:42 crc kubenswrapper[4880]: I1201 04:14:42.184195 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerStarted","Data":"aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00"}
Dec 01 04:14:43 crc kubenswrapper[4880]: I1201 04:14:43.199173 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerID="aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00" exitCode=0
Dec 01 04:14:43 crc kubenswrapper[4880]: I1201 04:14:43.199219 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerDied","Data":"aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00"}
Dec 01 04:14:44 crc kubenswrapper[4880]: I1201 04:14:44.211967 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerStarted","Data":"69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb"}
Dec 01 04:14:44 crc kubenswrapper[4880]: I1201 04:14:44.236921 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhqr7" podStartSLOduration=2.675112993 podStartE2EDuration="6.236901488s" podCreationTimestamp="2025-12-01 04:14:38 +0000 UTC" firstStartedPulling="2025-12-01 04:14:40.162696424 +0000 UTC m=+4709.673950826" lastFinishedPulling="2025-12-01 04:14:43.724484949 +0000 UTC m=+4713.235739321" observedRunningTime="2025-12-01 04:14:44.233635758 +0000 UTC m=+4713.744890160" watchObservedRunningTime="2025-12-01 04:14:44.236901488 +0000 UTC m=+4713.748155870"
Dec 01 04:14:48 crc kubenswrapper[4880]: I1201 04:14:48.617224 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:48 crc kubenswrapper[4880]: I1201 04:14:48.618600 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:48 crc kubenswrapper[4880]: I1201 04:14:48.685139 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:49 crc kubenswrapper[4880]: I1201 04:14:49.335262 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:49 crc kubenswrapper[4880]: I1201 04:14:49.648526 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhqr7"]
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.283251 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhqr7" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="registry-server" containerID="cri-o://69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb" gracePeriod=2
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.810770 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.874689 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-catalog-content\") pod \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") "
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.875007 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85d82\" (UniqueName: \"kubernetes.io/projected/d7a6c822-5a51-40d3-9775-a0c0f5d24874-kube-api-access-85d82\") pod \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") "
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.875072 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-utilities\") pod \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\" (UID: \"d7a6c822-5a51-40d3-9775-a0c0f5d24874\") "
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.877172 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-utilities" (OuterVolumeSpecName: "utilities") pod "d7a6c822-5a51-40d3-9775-a0c0f5d24874" (UID: "d7a6c822-5a51-40d3-9775-a0c0f5d24874"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.883523 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a6c822-5a51-40d3-9775-a0c0f5d24874-kube-api-access-85d82" (OuterVolumeSpecName: "kube-api-access-85d82") pod "d7a6c822-5a51-40d3-9775-a0c0f5d24874" (UID: "d7a6c822-5a51-40d3-9775-a0c0f5d24874"). InnerVolumeSpecName "kube-api-access-85d82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.980736 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85d82\" (UniqueName: \"kubernetes.io/projected/d7a6c822-5a51-40d3-9775-a0c0f5d24874-kube-api-access-85d82\") on node \"crc\" DevicePath \"\""
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.980771 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 04:14:51 crc kubenswrapper[4880]: I1201 04:14:51.982204 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7a6c822-5a51-40d3-9775-a0c0f5d24874" (UID: "d7a6c822-5a51-40d3-9775-a0c0f5d24874"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.082510 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7a6c822-5a51-40d3-9775-a0c0f5d24874-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.294021 4880 generic.go:334] "Generic (PLEG): container finished" podID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerID="69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb" exitCode=0
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.294061 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerDied","Data":"69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb"}
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.294088 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhqr7" event={"ID":"d7a6c822-5a51-40d3-9775-a0c0f5d24874","Type":"ContainerDied","Data":"9cb9b82908ee5e322cb7175559102bc4c8cd16be71c92a0b482d2721408bc5fb"}
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.294106 4880 scope.go:117] "RemoveContainer" containerID="69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.294233 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhqr7"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.316581 4880 scope.go:117] "RemoveContainer" containerID="aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.338494 4880 scope.go:117] "RemoveContainer" containerID="64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.342267 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhqr7"]
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.349397 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhqr7"]
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.383389 4880 scope.go:117] "RemoveContainer" containerID="69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb"
Dec 01 04:14:52 crc kubenswrapper[4880]: E1201 04:14:52.383835 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb\": container with ID starting with 69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb not found: ID does not exist" containerID="69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.383905 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb"} err="failed to get container status \"69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb\": rpc error: code = NotFound desc = could not find container \"69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb\": container with ID starting with 69e1219355cbdd1638d15fcf8f8b5d766c61b4da9cc678292f25a01e81145ccb not found: ID does not exist"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.383942 4880 scope.go:117] "RemoveContainer" containerID="aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00"
Dec 01 04:14:52 crc kubenswrapper[4880]: E1201 04:14:52.384259 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00\": container with ID starting with aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00 not found: ID does not exist" containerID="aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.384293 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00"} err="failed to get container status \"aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00\": rpc error: code = NotFound desc = could not find container \"aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00\": container with ID starting with aa9256100e7598bdad29eb5f178fa16b10986b0d3031c6c856438a0546191c00 not found: ID does not exist"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.384318 4880 scope.go:117] "RemoveContainer" containerID="64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c"
Dec 01 04:14:52 crc kubenswrapper[4880]: E1201 04:14:52.384695 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c\": container with ID starting with 64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c not found: ID does not exist" containerID="64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.384723 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c"} err="failed to get container status \"64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c\": rpc error: code = NotFound desc = could not find container \"64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c\": container with ID starting with 64f72ad0cd798d2f8243b6a0a8d2fc3ea766ce8a4044e23628fa390b11c7734c not found: ID does not exist"
Dec 01 04:14:52 crc kubenswrapper[4880]: I1201 04:14:52.793860 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" path="/var/lib/kubelet/pods/d7a6c822-5a51-40d3-9775-a0c0f5d24874/volumes"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.197754 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"]
Dec 01 04:15:00 crc kubenswrapper[4880]: E1201 04:15:00.199140 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="registry-server"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.199166 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="registry-server"
Dec 01 04:15:00 crc kubenswrapper[4880]: E1201 04:15:00.199202 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="extract-content"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.199217 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="extract-content"
Dec 01 04:15:00 crc kubenswrapper[4880]: E1201 04:15:00.199239 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="extract-utilities"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.199252 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="extract-utilities"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.199666 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a6c822-5a51-40d3-9775-a0c0f5d24874" containerName="registry-server"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.201112 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.215577 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"]
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.230235 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.230760 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.253176 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-secret-volume\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.253226 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxmbp\" (UniqueName: \"kubernetes.io/projected/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-kube-api-access-sxmbp\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.253297 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-config-volume\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.354907 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-config-volume\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.355042 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-secret-volume\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.355065 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxmbp\" (UniqueName: \"kubernetes.io/projected/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-kube-api-access-sxmbp\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.356374 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-config-volume\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.362286 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-secret-volume\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.371667 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxmbp\" (UniqueName: \"kubernetes.io/projected/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-kube-api-access-sxmbp\") pod \"collect-profiles-29409375-9666t\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:00 crc kubenswrapper[4880]: I1201 04:15:00.540562 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:01 crc kubenswrapper[4880]: I1201 04:15:01.099075 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"]
Dec 01 04:15:01 crc kubenswrapper[4880]: I1201 04:15:01.385431 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t" event={"ID":"093c5eb6-5fc7-4bd2-8483-16dd812da6b5","Type":"ContainerStarted","Data":"e317f322db4e1e7d2f245346fae56a9129b1db56324632e4ce68f2e5ccc11d2a"}
Dec 01 04:15:01 crc kubenswrapper[4880]: I1201 04:15:01.385724 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t" event={"ID":"093c5eb6-5fc7-4bd2-8483-16dd812da6b5","Type":"ContainerStarted","Data":"86068fa33125e91f290430dbed5fc4d14d34b331bcb511efaede5710178c2e04"}
Dec 01 04:15:01 crc kubenswrapper[4880]: I1201 04:15:01.405253 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t" podStartSLOduration=1.405233069 podStartE2EDuration="1.405233069s" podCreationTimestamp="2025-12-01 04:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 04:15:01.398392712 +0000 UTC m=+4730.909647104" watchObservedRunningTime="2025-12-01 04:15:01.405233069 +0000 UTC m=+4730.916487441"
Dec 01 04:15:02 crc kubenswrapper[4880]: I1201 04:15:02.398022 4880 generic.go:334] "Generic (PLEG): container finished" podID="093c5eb6-5fc7-4bd2-8483-16dd812da6b5" containerID="e317f322db4e1e7d2f245346fae56a9129b1db56324632e4ce68f2e5ccc11d2a" exitCode=0
Dec 01 04:15:02 crc kubenswrapper[4880]: I1201 04:15:02.398107 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t" event={"ID":"093c5eb6-5fc7-4bd2-8483-16dd812da6b5","Type":"ContainerDied","Data":"e317f322db4e1e7d2f245346fae56a9129b1db56324632e4ce68f2e5ccc11d2a"}
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.842132 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.925521 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-config-volume\") pod \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") "
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.925729 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-secret-volume\") pod \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") "
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.925940 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxmbp\" (UniqueName: \"kubernetes.io/projected/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-kube-api-access-sxmbp\") pod \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\" (UID: \"093c5eb6-5fc7-4bd2-8483-16dd812da6b5\") "
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.926247 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "093c5eb6-5fc7-4bd2-8483-16dd812da6b5" (UID: "093c5eb6-5fc7-4bd2-8483-16dd812da6b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.926958 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.939130 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "093c5eb6-5fc7-4bd2-8483-16dd812da6b5" (UID: "093c5eb6-5fc7-4bd2-8483-16dd812da6b5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 04:15:03 crc kubenswrapper[4880]: I1201 04:15:03.947078 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-kube-api-access-sxmbp" (OuterVolumeSpecName: "kube-api-access-sxmbp") pod "093c5eb6-5fc7-4bd2-8483-16dd812da6b5" (UID: "093c5eb6-5fc7-4bd2-8483-16dd812da6b5"). InnerVolumeSpecName "kube-api-access-sxmbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.029049 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.029101 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxmbp\" (UniqueName: \"kubernetes.io/projected/093c5eb6-5fc7-4bd2-8483-16dd812da6b5-kube-api-access-sxmbp\") on node \"crc\" DevicePath \"\"" Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.417207 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t" event={"ID":"093c5eb6-5fc7-4bd2-8483-16dd812da6b5","Type":"ContainerDied","Data":"86068fa33125e91f290430dbed5fc4d14d34b331bcb511efaede5710178c2e04"} Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.417635 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86068fa33125e91f290430dbed5fc4d14d34b331bcb511efaede5710178c2e04" Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.417270 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t" Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.493613 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9"] Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.502508 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409330-n9cw9"] Dec 01 04:15:04 crc kubenswrapper[4880]: I1201 04:15:04.794809 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a53464-da08-48b9-83fd-5f68c6dfe562" path="/var/lib/kubelet/pods/44a53464-da08-48b9-83fd-5f68c6dfe562/volumes" Dec 01 04:15:17 crc kubenswrapper[4880]: I1201 04:15:17.369570 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:15:17 crc kubenswrapper[4880]: I1201 04:15:17.370436 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:15:43 crc kubenswrapper[4880]: I1201 04:15:43.533483 4880 scope.go:117] "RemoveContainer" containerID="d8a2eb40a2ccf6ca09b9c5039d5d3e1582f1ee31b28113cdd56b55250e7a0e8e" Dec 01 04:15:47 crc kubenswrapper[4880]: I1201 04:15:47.369369 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 01 04:15:47 crc kubenswrapper[4880]: I1201 04:15:47.370004 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:16:17 crc kubenswrapper[4880]: I1201 04:16:17.369065 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:16:17 crc kubenswrapper[4880]: I1201 04:16:17.369487 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:16:17 crc kubenswrapper[4880]: I1201 04:16:17.369535 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:16:17 crc kubenswrapper[4880]: I1201 04:16:17.370486 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"997baa5b4ed451ba7c78e8285dcf8e3d295f8f8975ddcdd68231c8c4f4eed108"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:16:17 crc kubenswrapper[4880]: I1201 04:16:17.370565 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://997baa5b4ed451ba7c78e8285dcf8e3d295f8f8975ddcdd68231c8c4f4eed108" gracePeriod=600 Dec 01 04:16:18 crc kubenswrapper[4880]: I1201 04:16:18.207525 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="997baa5b4ed451ba7c78e8285dcf8e3d295f8f8975ddcdd68231c8c4f4eed108" exitCode=0 Dec 01 04:16:18 crc kubenswrapper[4880]: I1201 04:16:18.207701 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"997baa5b4ed451ba7c78e8285dcf8e3d295f8f8975ddcdd68231c8c4f4eed108"} Dec 01 04:16:18 crc kubenswrapper[4880]: I1201 04:16:18.208204 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8"} Dec 01 04:16:18 crc kubenswrapper[4880]: I1201 04:16:18.208225 4880 scope.go:117] "RemoveContainer" containerID="17a5c5d6243e388272cd38c5a50583a30f1380712d5f795fb4b05766c2c2a551" Dec 01 04:16:38 crc kubenswrapper[4880]: I1201 04:16:38.913497 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vgm79"] Dec 01 04:16:38 crc kubenswrapper[4880]: E1201 04:16:38.914587 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="093c5eb6-5fc7-4bd2-8483-16dd812da6b5" containerName="collect-profiles" Dec 01 04:16:38 crc kubenswrapper[4880]: I1201 04:16:38.914606 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="093c5eb6-5fc7-4bd2-8483-16dd812da6b5" containerName="collect-profiles" Dec 01 04:16:38 crc kubenswrapper[4880]: I1201 04:16:38.914865 4880 
memory_manager.go:354] "RemoveStaleState removing state" podUID="093c5eb6-5fc7-4bd2-8483-16dd812da6b5" containerName="collect-profiles" Dec 01 04:16:38 crc kubenswrapper[4880]: I1201 04:16:38.918368 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:38 crc kubenswrapper[4880]: I1201 04:16:38.955481 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vgm79"] Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.055562 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28t5b\" (UniqueName: \"kubernetes.io/projected/7443e04f-5d38-443a-a91e-73faf7c4ac76-kube-api-access-28t5b\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.055718 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-utilities\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.055862 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-catalog-content\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.157432 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-catalog-content\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.157498 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28t5b\" (UniqueName: \"kubernetes.io/projected/7443e04f-5d38-443a-a91e-73faf7c4ac76-kube-api-access-28t5b\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.157575 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-utilities\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.158196 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-utilities\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.158197 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-catalog-content\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.194106 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28t5b\" (UniqueName: 
\"kubernetes.io/projected/7443e04f-5d38-443a-a91e-73faf7c4ac76-kube-api-access-28t5b\") pod \"community-operators-vgm79\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.261438 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:39 crc kubenswrapper[4880]: I1201 04:16:39.932760 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vgm79"] Dec 01 04:16:40 crc kubenswrapper[4880]: I1201 04:16:40.488666 4880 generic.go:334] "Generic (PLEG): container finished" podID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerID="99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3" exitCode=0 Dec 01 04:16:40 crc kubenswrapper[4880]: I1201 04:16:40.488748 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgm79" event={"ID":"7443e04f-5d38-443a-a91e-73faf7c4ac76","Type":"ContainerDied","Data":"99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3"} Dec 01 04:16:40 crc kubenswrapper[4880]: I1201 04:16:40.489220 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgm79" event={"ID":"7443e04f-5d38-443a-a91e-73faf7c4ac76","Type":"ContainerStarted","Data":"726214a941833d233f512e6bb019c2eb45132997ac400bd12e797e2f0f02e83d"} Dec 01 04:16:40 crc kubenswrapper[4880]: I1201 04:16:40.492277 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:16:42 crc kubenswrapper[4880]: I1201 04:16:42.511570 4880 generic.go:334] "Generic (PLEG): container finished" podID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerID="76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200" exitCode=0 Dec 01 04:16:42 crc kubenswrapper[4880]: I1201 04:16:42.511680 4880 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgm79" event={"ID":"7443e04f-5d38-443a-a91e-73faf7c4ac76","Type":"ContainerDied","Data":"76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200"} Dec 01 04:16:43 crc kubenswrapper[4880]: I1201 04:16:43.523012 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgm79" event={"ID":"7443e04f-5d38-443a-a91e-73faf7c4ac76","Type":"ContainerStarted","Data":"a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102"} Dec 01 04:16:43 crc kubenswrapper[4880]: I1201 04:16:43.549026 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vgm79" podStartSLOduration=3.015259158 podStartE2EDuration="5.549005969s" podCreationTimestamp="2025-12-01 04:16:38 +0000 UTC" firstStartedPulling="2025-12-01 04:16:40.491517165 +0000 UTC m=+4830.002771537" lastFinishedPulling="2025-12-01 04:16:43.025263946 +0000 UTC m=+4832.536518348" observedRunningTime="2025-12-01 04:16:43.540916062 +0000 UTC m=+4833.052170464" watchObservedRunningTime="2025-12-01 04:16:43.549005969 +0000 UTC m=+4833.060260361" Dec 01 04:16:49 crc kubenswrapper[4880]: I1201 04:16:49.261572 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:49 crc kubenswrapper[4880]: I1201 04:16:49.262513 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:49 crc kubenswrapper[4880]: I1201 04:16:49.882638 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:49 crc kubenswrapper[4880]: I1201 04:16:49.936586 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:50 crc kubenswrapper[4880]: 
I1201 04:16:50.116081 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vgm79"] Dec 01 04:16:51 crc kubenswrapper[4880]: I1201 04:16:51.613458 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vgm79" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="registry-server" containerID="cri-o://a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102" gracePeriod=2 Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.467807 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.522476 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28t5b\" (UniqueName: \"kubernetes.io/projected/7443e04f-5d38-443a-a91e-73faf7c4ac76-kube-api-access-28t5b\") pod \"7443e04f-5d38-443a-a91e-73faf7c4ac76\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.522614 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-utilities\") pod \"7443e04f-5d38-443a-a91e-73faf7c4ac76\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.522671 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-catalog-content\") pod \"7443e04f-5d38-443a-a91e-73faf7c4ac76\" (UID: \"7443e04f-5d38-443a-a91e-73faf7c4ac76\") " Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.523539 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-utilities" 
(OuterVolumeSpecName: "utilities") pod "7443e04f-5d38-443a-a91e-73faf7c4ac76" (UID: "7443e04f-5d38-443a-a91e-73faf7c4ac76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.530158 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7443e04f-5d38-443a-a91e-73faf7c4ac76-kube-api-access-28t5b" (OuterVolumeSpecName: "kube-api-access-28t5b") pod "7443e04f-5d38-443a-a91e-73faf7c4ac76" (UID: "7443e04f-5d38-443a-a91e-73faf7c4ac76"). InnerVolumeSpecName "kube-api-access-28t5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.581690 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7443e04f-5d38-443a-a91e-73faf7c4ac76" (UID: "7443e04f-5d38-443a-a91e-73faf7c4ac76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.624296 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.624353 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7443e04f-5d38-443a-a91e-73faf7c4ac76-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.624363 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28t5b\" (UniqueName: \"kubernetes.io/projected/7443e04f-5d38-443a-a91e-73faf7c4ac76-kube-api-access-28t5b\") on node \"crc\" DevicePath \"\"" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.625421 4880 generic.go:334] "Generic (PLEG): container finished" podID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerID="a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102" exitCode=0 Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.625448 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgm79" event={"ID":"7443e04f-5d38-443a-a91e-73faf7c4ac76","Type":"ContainerDied","Data":"a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102"} Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.625470 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgm79" event={"ID":"7443e04f-5d38-443a-a91e-73faf7c4ac76","Type":"ContainerDied","Data":"726214a941833d233f512e6bb019c2eb45132997ac400bd12e797e2f0f02e83d"} Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.625486 4880 scope.go:117] "RemoveContainer" containerID="a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 
04:16:52.625587 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgm79" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.648156 4880 scope.go:117] "RemoveContainer" containerID="76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.667175 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vgm79"] Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.674772 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vgm79"] Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.688089 4880 scope.go:117] "RemoveContainer" containerID="99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.725709 4880 scope.go:117] "RemoveContainer" containerID="a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102" Dec 01 04:16:52 crc kubenswrapper[4880]: E1201 04:16:52.726096 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102\": container with ID starting with a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102 not found: ID does not exist" containerID="a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.726127 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102"} err="failed to get container status \"a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102\": rpc error: code = NotFound desc = could not find container \"a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102\": container with ID starting with 
a144d76e26526abaddf6ee4c790ad446753815f2964b185b0e7334cf147b0102 not found: ID does not exist" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.726147 4880 scope.go:117] "RemoveContainer" containerID="76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200" Dec 01 04:16:52 crc kubenswrapper[4880]: E1201 04:16:52.726354 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200\": container with ID starting with 76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200 not found: ID does not exist" containerID="76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.726378 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200"} err="failed to get container status \"76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200\": rpc error: code = NotFound desc = could not find container \"76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200\": container with ID starting with 76b3b5297ae62cc0e86effc8b3dbfe141ade48ff917a1a8b160a82083bdfd200 not found: ID does not exist" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.726393 4880 scope.go:117] "RemoveContainer" containerID="99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3" Dec 01 04:16:52 crc kubenswrapper[4880]: E1201 04:16:52.726540 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3\": container with ID starting with 99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3 not found: ID does not exist" containerID="99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3" Dec 01 04:16:52 crc 
kubenswrapper[4880]: I1201 04:16:52.726561 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3"} err="failed to get container status \"99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3\": rpc error: code = NotFound desc = could not find container \"99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3\": container with ID starting with 99b8cb6043fe47bfd280c5ea0402c696fbf9df81201737283923b69053c6f7c3 not found: ID does not exist" Dec 01 04:16:52 crc kubenswrapper[4880]: I1201 04:16:52.816730 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" path="/var/lib/kubelet/pods/7443e04f-5d38-443a-a91e-73faf7c4ac76/volumes" Dec 01 04:18:17 crc kubenswrapper[4880]: I1201 04:18:17.370000 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:18:17 crc kubenswrapper[4880]: I1201 04:18:17.370714 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:18:47 crc kubenswrapper[4880]: I1201 04:18:47.369366 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:18:47 crc kubenswrapper[4880]: I1201 04:18:47.370086 4880 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:19:17 crc kubenswrapper[4880]: I1201 04:19:17.368611 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:19:17 crc kubenswrapper[4880]: I1201 04:19:17.369295 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:19:17 crc kubenswrapper[4880]: I1201 04:19:17.369352 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:19:17 crc kubenswrapper[4880]: I1201 04:19:17.370212 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:19:17 crc kubenswrapper[4880]: I1201 04:19:17.370280 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" 
containerName="machine-config-daemon" containerID="cri-o://837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" gracePeriod=600 Dec 01 04:19:18 crc kubenswrapper[4880]: I1201 04:19:18.352837 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" exitCode=0 Dec 01 04:19:18 crc kubenswrapper[4880]: I1201 04:19:18.352902 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8"} Dec 01 04:19:18 crc kubenswrapper[4880]: I1201 04:19:18.353607 4880 scope.go:117] "RemoveContainer" containerID="997baa5b4ed451ba7c78e8285dcf8e3d295f8f8975ddcdd68231c8c4f4eed108" Dec 01 04:19:18 crc kubenswrapper[4880]: E1201 04:19:18.514108 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:19:19 crc kubenswrapper[4880]: I1201 04:19:19.368263 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:19:19 crc kubenswrapper[4880]: E1201 04:19:19.368882 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:19:31 crc kubenswrapper[4880]: I1201 04:19:31.787769 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:19:31 crc kubenswrapper[4880]: E1201 04:19:31.789041 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:19:42 crc kubenswrapper[4880]: I1201 04:19:42.784464 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:19:42 crc kubenswrapper[4880]: E1201 04:19:42.785577 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:19:54 crc kubenswrapper[4880]: I1201 04:19:54.784318 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:19:54 crc kubenswrapper[4880]: E1201 04:19:54.784943 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:20:09 crc kubenswrapper[4880]: I1201 04:20:09.784418 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:20:09 crc kubenswrapper[4880]: E1201 04:20:09.785205 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:20:24 crc kubenswrapper[4880]: I1201 04:20:24.784529 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:20:24 crc kubenswrapper[4880]: E1201 04:20:24.785558 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:20:39 crc kubenswrapper[4880]: I1201 04:20:39.783973 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:20:39 crc kubenswrapper[4880]: E1201 04:20:39.784750 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:20:52 crc kubenswrapper[4880]: I1201 04:20:52.784525 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:20:52 crc kubenswrapper[4880]: E1201 04:20:52.786444 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:21:07 crc kubenswrapper[4880]: I1201 04:21:07.785127 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:21:07 crc kubenswrapper[4880]: E1201 04:21:07.785894 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:21:22 crc kubenswrapper[4880]: I1201 04:21:22.784259 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:21:22 crc kubenswrapper[4880]: E1201 04:21:22.785019 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.326921 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-657jn"] Dec 01 04:21:34 crc kubenswrapper[4880]: E1201 04:21:34.328006 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="extract-content" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.328023 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="extract-content" Dec 01 04:21:34 crc kubenswrapper[4880]: E1201 04:21:34.328038 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="registry-server" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.328047 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="registry-server" Dec 01 04:21:34 crc kubenswrapper[4880]: E1201 04:21:34.328091 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="extract-utilities" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.328099 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="extract-utilities" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.328330 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="7443e04f-5d38-443a-a91e-73faf7c4ac76" containerName="registry-server" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.330153 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.345429 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-657jn"] Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.463953 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-catalog-content\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.464398 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-utilities\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.464684 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwcv\" (UniqueName: \"kubernetes.io/projected/16e6e2a3-0696-42c8-af85-deaae4b79884-kube-api-access-9vwcv\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.569262 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-catalog-content\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.569358 4880 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-utilities\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.569448 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwcv\" (UniqueName: \"kubernetes.io/projected/16e6e2a3-0696-42c8-af85-deaae4b79884-kube-api-access-9vwcv\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.569708 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-catalog-content\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.570082 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-utilities\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.589736 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vwcv\" (UniqueName: \"kubernetes.io/projected/16e6e2a3-0696-42c8-af85-deaae4b79884-kube-api-access-9vwcv\") pod \"redhat-operators-657jn\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:34 crc kubenswrapper[4880]: I1201 04:21:34.664347 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:35 crc kubenswrapper[4880]: I1201 04:21:35.174602 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-657jn"] Dec 01 04:21:35 crc kubenswrapper[4880]: I1201 04:21:35.787424 4880 generic.go:334] "Generic (PLEG): container finished" podID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerID="1b1a8464a37233b8b8b457dd94723f962b93b9e5d90eb85895ee8f5c7b90aac5" exitCode=0 Dec 01 04:21:35 crc kubenswrapper[4880]: I1201 04:21:35.787571 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerDied","Data":"1b1a8464a37233b8b8b457dd94723f962b93b9e5d90eb85895ee8f5c7b90aac5"} Dec 01 04:21:35 crc kubenswrapper[4880]: I1201 04:21:35.788615 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerStarted","Data":"75c6fc3c3372fa900767f859aca4e168eb6183b409f747dedcad62ec1cc9fb54"} Dec 01 04:21:36 crc kubenswrapper[4880]: I1201 04:21:36.801350 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerStarted","Data":"b2e6dcc0e38d1be96495b05aea49fd63ded1d6ca1995c9ed8c098a5119218750"} Dec 01 04:21:37 crc kubenswrapper[4880]: I1201 04:21:37.783848 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:21:37 crc kubenswrapper[4880]: E1201 04:21:37.784324 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.675936 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9t4q5"] Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.683494 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.692194 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t4q5"] Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.784050 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-utilities\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.784134 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-catalog-content\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.784405 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kj5\" (UniqueName: \"kubernetes.io/projected/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-kube-api-access-f5kj5\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 
01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.831279 4880 generic.go:334] "Generic (PLEG): container finished" podID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerID="b2e6dcc0e38d1be96495b05aea49fd63ded1d6ca1995c9ed8c098a5119218750" exitCode=0 Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.831323 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerDied","Data":"b2e6dcc0e38d1be96495b05aea49fd63ded1d6ca1995c9ed8c098a5119218750"} Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.886248 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-utilities\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.886377 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-catalog-content\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.886663 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5kj5\" (UniqueName: \"kubernetes.io/projected/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-kube-api-access-f5kj5\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.887848 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-utilities\") pod 
\"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.888416 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-catalog-content\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:39 crc kubenswrapper[4880]: I1201 04:21:39.908138 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5kj5\" (UniqueName: \"kubernetes.io/projected/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-kube-api-access-f5kj5\") pod \"redhat-marketplace-9t4q5\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.025453 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.586771 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t4q5"] Dec 01 04:21:40 crc kubenswrapper[4880]: W1201 04:21:40.588336 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b08b4ff_09c7_4607_b45c_3cc00c2d53c8.slice/crio-22239347a6e9344807b0280ec61ff864729104238a6aed6c91b5134eba0cc0f4 WatchSource:0}: Error finding container 22239347a6e9344807b0280ec61ff864729104238a6aed6c91b5134eba0cc0f4: Status 404 returned error can't find the container with id 22239347a6e9344807b0280ec61ff864729104238a6aed6c91b5134eba0cc0f4 Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.840648 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerID="e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653" exitCode=0 Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.840750 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerDied","Data":"e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653"} Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.840798 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerStarted","Data":"22239347a6e9344807b0280ec61ff864729104238a6aed6c91b5134eba0cc0f4"} Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.842936 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:21:40 crc kubenswrapper[4880]: I1201 04:21:40.844379 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerStarted","Data":"89132bd2e4734549f527e943496045244d6bfd11422d9dbd249254f4066bc7cf"} Dec 01 04:21:41 crc kubenswrapper[4880]: I1201 04:21:41.855499 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerStarted","Data":"ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a"} Dec 01 04:21:41 crc kubenswrapper[4880]: I1201 04:21:41.883687 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-657jn" podStartSLOduration=3.281993661 podStartE2EDuration="7.883665561s" podCreationTimestamp="2025-12-01 04:21:34 +0000 UTC" firstStartedPulling="2025-12-01 04:21:35.789822021 +0000 UTC m=+5125.301076393" lastFinishedPulling="2025-12-01 04:21:40.391493921 +0000 UTC m=+5129.902748293" observedRunningTime="2025-12-01 04:21:40.87730298 +0000 UTC m=+5130.388557352" watchObservedRunningTime="2025-12-01 04:21:41.883665561 +0000 UTC m=+5131.394919943" Dec 01 04:21:42 crc kubenswrapper[4880]: I1201 04:21:42.866516 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerID="ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a" exitCode=0 Dec 01 04:21:42 crc kubenswrapper[4880]: I1201 04:21:42.866561 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerDied","Data":"ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a"} Dec 01 04:21:43 crc kubenswrapper[4880]: I1201 04:21:43.879253 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" 
event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerStarted","Data":"8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a"} Dec 01 04:21:43 crc kubenswrapper[4880]: I1201 04:21:43.909257 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9t4q5" podStartSLOduration=2.327639589 podStartE2EDuration="4.909238177s" podCreationTimestamp="2025-12-01 04:21:39 +0000 UTC" firstStartedPulling="2025-12-01 04:21:40.842683311 +0000 UTC m=+5130.353937683" lastFinishedPulling="2025-12-01 04:21:43.424281889 +0000 UTC m=+5132.935536271" observedRunningTime="2025-12-01 04:21:43.903148598 +0000 UTC m=+5133.414402980" watchObservedRunningTime="2025-12-01 04:21:43.909238177 +0000 UTC m=+5133.420492549" Dec 01 04:21:44 crc kubenswrapper[4880]: I1201 04:21:44.665103 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:44 crc kubenswrapper[4880]: I1201 04:21:44.665811 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:21:45 crc kubenswrapper[4880]: I1201 04:21:45.722505 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-657jn" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="registry-server" probeResult="failure" output=< Dec 01 04:21:45 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:21:45 crc kubenswrapper[4880]: > Dec 01 04:21:49 crc kubenswrapper[4880]: I1201 04:21:49.783982 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:21:49 crc kubenswrapper[4880]: E1201 04:21:49.784590 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:21:50 crc kubenswrapper[4880]: I1201 04:21:50.026459 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:50 crc kubenswrapper[4880]: I1201 04:21:50.026664 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:50 crc kubenswrapper[4880]: I1201 04:21:50.079049 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:51 crc kubenswrapper[4880]: I1201 04:21:51.014487 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:51 crc kubenswrapper[4880]: I1201 04:21:51.078249 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t4q5"] Dec 01 04:21:52 crc kubenswrapper[4880]: I1201 04:21:52.970829 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9t4q5" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="registry-server" containerID="cri-o://8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a" gracePeriod=2 Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.689771 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.858674 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-catalog-content\") pod \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.858817 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5kj5\" (UniqueName: \"kubernetes.io/projected/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-kube-api-access-f5kj5\") pod \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.858925 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-utilities\") pod \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\" (UID: \"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8\") " Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.859841 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-utilities" (OuterVolumeSpecName: "utilities") pod "2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" (UID: "2b08b4ff-09c7-4607-b45c-3cc00c2d53c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.865051 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-kube-api-access-f5kj5" (OuterVolumeSpecName: "kube-api-access-f5kj5") pod "2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" (UID: "2b08b4ff-09c7-4607-b45c-3cc00c2d53c8"). InnerVolumeSpecName "kube-api-access-f5kj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.877362 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" (UID: "2b08b4ff-09c7-4607-b45c-3cc00c2d53c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.961372 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.961420 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5kj5\" (UniqueName: \"kubernetes.io/projected/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-kube-api-access-f5kj5\") on node \"crc\" DevicePath \"\"" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.961432 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.981290 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerID="8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a" exitCode=0 Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.981334 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerDied","Data":"8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a"} Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.981364 4880 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9t4q5" event={"ID":"2b08b4ff-09c7-4607-b45c-3cc00c2d53c8","Type":"ContainerDied","Data":"22239347a6e9344807b0280ec61ff864729104238a6aed6c91b5134eba0cc0f4"} Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.981362 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t4q5" Dec 01 04:21:53 crc kubenswrapper[4880]: I1201 04:21:53.981382 4880 scope.go:117] "RemoveContainer" containerID="8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.003510 4880 scope.go:117] "RemoveContainer" containerID="ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.036808 4880 scope.go:117] "RemoveContainer" containerID="e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.039468 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t4q5"] Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.053124 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t4q5"] Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.097071 4880 scope.go:117] "RemoveContainer" containerID="8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a" Dec 01 04:21:54 crc kubenswrapper[4880]: E1201 04:21:54.097745 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a\": container with ID starting with 8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a not found: ID does not exist" containerID="8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.097808 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a"} err="failed to get container status \"8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a\": rpc error: code = NotFound desc = could not find container \"8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a\": container with ID starting with 8faf60e21fbb98ae72287475e225c69295b72d709bf67a319d61927e0cd2740a not found: ID does not exist" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.097836 4880 scope.go:117] "RemoveContainer" containerID="ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a" Dec 01 04:21:54 crc kubenswrapper[4880]: E1201 04:21:54.098092 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a\": container with ID starting with ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a not found: ID does not exist" containerID="ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.098157 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a"} err="failed to get container status \"ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a\": rpc error: code = NotFound desc = could not find container \"ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a\": container with ID starting with ecda4af195066e5a51e5fe311ca2e71eff1895b8e12e094cc8100e9bccb1022a not found: ID does not exist" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.098289 4880 scope.go:117] "RemoveContainer" containerID="e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653" Dec 01 04:21:54 crc kubenswrapper[4880]: E1201 
04:21:54.098616 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653\": container with ID starting with e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653 not found: ID does not exist" containerID="e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.098663 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653"} err="failed to get container status \"e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653\": rpc error: code = NotFound desc = could not find container \"e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653\": container with ID starting with e31e11d194656239c0beb49204f8b8db87f948bd5852fa89be29872605c22653 not found: ID does not exist" Dec 01 04:21:54 crc kubenswrapper[4880]: I1201 04:21:54.804752 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" path="/var/lib/kubelet/pods/2b08b4ff-09c7-4607-b45c-3cc00c2d53c8/volumes" Dec 01 04:21:55 crc kubenswrapper[4880]: I1201 04:21:55.715113 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-657jn" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="registry-server" probeResult="failure" output=< Dec 01 04:21:55 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:21:55 crc kubenswrapper[4880]: > Dec 01 04:22:04 crc kubenswrapper[4880]: I1201 04:22:04.715145 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:22:04 crc kubenswrapper[4880]: I1201 04:22:04.762835 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:22:04 crc kubenswrapper[4880]: I1201 04:22:04.786089 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:22:04 crc kubenswrapper[4880]: E1201 04:22:04.786363 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:22:05 crc kubenswrapper[4880]: I1201 04:22:05.513535 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-657jn"] Dec 01 04:22:06 crc kubenswrapper[4880]: I1201 04:22:06.095097 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-657jn" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="registry-server" containerID="cri-o://89132bd2e4734549f527e943496045244d6bfd11422d9dbd249254f4066bc7cf" gracePeriod=2 Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.105782 4880 generic.go:334] "Generic (PLEG): container finished" podID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerID="89132bd2e4734549f527e943496045244d6bfd11422d9dbd249254f4066bc7cf" exitCode=0 Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.105880 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerDied","Data":"89132bd2e4734549f527e943496045244d6bfd11422d9dbd249254f4066bc7cf"} Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.106133 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-657jn" event={"ID":"16e6e2a3-0696-42c8-af85-deaae4b79884","Type":"ContainerDied","Data":"75c6fc3c3372fa900767f859aca4e168eb6183b409f747dedcad62ec1cc9fb54"} Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.106150 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c6fc3c3372fa900767f859aca4e168eb6183b409f747dedcad62ec1cc9fb54" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.108240 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.227806 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-utilities\") pod \"16e6e2a3-0696-42c8-af85-deaae4b79884\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.227889 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vwcv\" (UniqueName: \"kubernetes.io/projected/16e6e2a3-0696-42c8-af85-deaae4b79884-kube-api-access-9vwcv\") pod \"16e6e2a3-0696-42c8-af85-deaae4b79884\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.228182 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-catalog-content\") pod \"16e6e2a3-0696-42c8-af85-deaae4b79884\" (UID: \"16e6e2a3-0696-42c8-af85-deaae4b79884\") " Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.230109 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-utilities" (OuterVolumeSpecName: "utilities") pod "16e6e2a3-0696-42c8-af85-deaae4b79884" (UID: 
"16e6e2a3-0696-42c8-af85-deaae4b79884"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.235418 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e6e2a3-0696-42c8-af85-deaae4b79884-kube-api-access-9vwcv" (OuterVolumeSpecName: "kube-api-access-9vwcv") pod "16e6e2a3-0696-42c8-af85-deaae4b79884" (UID: "16e6e2a3-0696-42c8-af85-deaae4b79884"). InnerVolumeSpecName "kube-api-access-9vwcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.330149 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16e6e2a3-0696-42c8-af85-deaae4b79884" (UID: "16e6e2a3-0696-42c8-af85-deaae4b79884"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.330801 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vwcv\" (UniqueName: \"kubernetes.io/projected/16e6e2a3-0696-42c8-af85-deaae4b79884-kube-api-access-9vwcv\") on node \"crc\" DevicePath \"\"" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.330828 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:22:07 crc kubenswrapper[4880]: I1201 04:22:07.330838 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16e6e2a3-0696-42c8-af85-deaae4b79884-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:22:08 crc kubenswrapper[4880]: I1201 04:22:08.114334 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-657jn" Dec 01 04:22:08 crc kubenswrapper[4880]: I1201 04:22:08.151767 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-657jn"] Dec 01 04:22:08 crc kubenswrapper[4880]: I1201 04:22:08.160028 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-657jn"] Dec 01 04:22:08 crc kubenswrapper[4880]: I1201 04:22:08.799034 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" path="/var/lib/kubelet/pods/16e6e2a3-0696-42c8-af85-deaae4b79884/volumes" Dec 01 04:22:15 crc kubenswrapper[4880]: I1201 04:22:15.783994 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:22:15 crc kubenswrapper[4880]: E1201 04:22:15.784780 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:22:28 crc kubenswrapper[4880]: I1201 04:22:28.785325 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:22:28 crc kubenswrapper[4880]: E1201 04:22:28.788196 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:22:39 crc kubenswrapper[4880]: I1201 04:22:39.784144 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:22:39 crc kubenswrapper[4880]: E1201 04:22:39.785041 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:22:50 crc kubenswrapper[4880]: I1201 04:22:50.786378 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:22:50 crc kubenswrapper[4880]: E1201 04:22:50.788412 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:23:02 crc kubenswrapper[4880]: I1201 04:23:02.784727 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:23:02 crc kubenswrapper[4880]: E1201 04:23:02.785768 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:23:15 crc kubenswrapper[4880]: I1201 04:23:15.785308 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:23:15 crc kubenswrapper[4880]: E1201 04:23:15.786605 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:23:30 crc kubenswrapper[4880]: I1201 04:23:30.803927 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:23:30 crc kubenswrapper[4880]: E1201 04:23:30.805588 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:23:43 crc kubenswrapper[4880]: I1201 04:23:43.783691 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:23:43 crc kubenswrapper[4880]: E1201 04:23:43.784927 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:23:57 crc kubenswrapper[4880]: I1201 04:23:57.784834 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:23:57 crc kubenswrapper[4880]: E1201 04:23:57.785644 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:24:08 crc kubenswrapper[4880]: I1201 04:24:08.784209 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:24:08 crc kubenswrapper[4880]: E1201 04:24:08.785511 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:24:20 crc kubenswrapper[4880]: I1201 04:24:20.805060 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:24:21 crc kubenswrapper[4880]: I1201 04:24:21.607605 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"fd58a34c72b46c115c5a2ffd467b9e557213b09163cc6879a8bc7a510df98b1e"} Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.229016 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zl5rb"] Dec 01 04:25:20 crc kubenswrapper[4880]: E1201 04:25:20.261573 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="extract-content" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.261839 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="extract-content" Dec 01 04:25:20 crc kubenswrapper[4880]: E1201 04:25:20.261992 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="registry-server" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.262071 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="registry-server" Dec 01 04:25:20 crc kubenswrapper[4880]: E1201 04:25:20.262200 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="extract-utilities" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.262321 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="extract-utilities" Dec 01 04:25:20 crc kubenswrapper[4880]: E1201 04:25:20.262482 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="extract-utilities" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.262586 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="extract-utilities" Dec 01 04:25:20 crc kubenswrapper[4880]: E1201 04:25:20.262697 4880 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="extract-content" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.262812 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="extract-content" Dec 01 04:25:20 crc kubenswrapper[4880]: E1201 04:25:20.262972 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="registry-server" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.263058 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="registry-server" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.264079 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b08b4ff-09c7-4607-b45c-3cc00c2d53c8" containerName="registry-server" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.264228 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e6e2a3-0696-42c8-af85-deaae4b79884" containerName="registry-server" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.267301 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.270607 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl5rb"] Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.350236 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kn8d\" (UniqueName: \"kubernetes.io/projected/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-kube-api-access-8kn8d\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.350328 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-catalog-content\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.350407 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-utilities\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.452449 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-utilities\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.452539 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8kn8d\" (UniqueName: \"kubernetes.io/projected/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-kube-api-access-8kn8d\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.452608 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-catalog-content\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.452943 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-utilities\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.453027 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-catalog-content\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.513075 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kn8d\" (UniqueName: \"kubernetes.io/projected/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-kube-api-access-8kn8d\") pod \"certified-operators-zl5rb\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.587366 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:20 crc kubenswrapper[4880]: I1201 04:25:20.986651 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl5rb"] Dec 01 04:25:21 crc kubenswrapper[4880]: I1201 04:25:21.258242 4880 generic.go:334] "Generic (PLEG): container finished" podID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerID="d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27" exitCode=0 Dec 01 04:25:21 crc kubenswrapper[4880]: I1201 04:25:21.258306 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerDied","Data":"d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27"} Dec 01 04:25:21 crc kubenswrapper[4880]: I1201 04:25:21.258348 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerStarted","Data":"101c95175a9a027f021b58e9e79351359d6e3c8be6d1f9ea438b39d94bb1783a"} Dec 01 04:25:22 crc kubenswrapper[4880]: I1201 04:25:22.281145 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerStarted","Data":"54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610"} Dec 01 04:25:24 crc kubenswrapper[4880]: I1201 04:25:24.306384 4880 generic.go:334] "Generic (PLEG): container finished" podID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerID="54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610" exitCode=0 Dec 01 04:25:24 crc kubenswrapper[4880]: I1201 04:25:24.306581 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" 
event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerDied","Data":"54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610"} Dec 01 04:25:25 crc kubenswrapper[4880]: I1201 04:25:25.322615 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerStarted","Data":"e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c"} Dec 01 04:25:25 crc kubenswrapper[4880]: I1201 04:25:25.360460 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zl5rb" podStartSLOduration=1.873166376 podStartE2EDuration="5.360439934s" podCreationTimestamp="2025-12-01 04:25:20 +0000 UTC" firstStartedPulling="2025-12-01 04:25:21.25992422 +0000 UTC m=+5350.771178592" lastFinishedPulling="2025-12-01 04:25:24.747197778 +0000 UTC m=+5354.258452150" observedRunningTime="2025-12-01 04:25:25.34722649 +0000 UTC m=+5354.858480882" watchObservedRunningTime="2025-12-01 04:25:25.360439934 +0000 UTC m=+5354.871694316" Dec 01 04:25:30 crc kubenswrapper[4880]: I1201 04:25:30.587759 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:30 crc kubenswrapper[4880]: I1201 04:25:30.588457 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:30 crc kubenswrapper[4880]: I1201 04:25:30.681549 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:31 crc kubenswrapper[4880]: I1201 04:25:31.429921 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:31 crc kubenswrapper[4880]: I1201 04:25:31.494731 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-zl5rb"] Dec 01 04:25:33 crc kubenswrapper[4880]: I1201 04:25:33.399739 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zl5rb" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="registry-server" containerID="cri-o://e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c" gracePeriod=2 Dec 01 04:25:33 crc kubenswrapper[4880]: I1201 04:25:33.919251 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.092975 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-utilities\") pod \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.093100 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kn8d\" (UniqueName: \"kubernetes.io/projected/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-kube-api-access-8kn8d\") pod \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.093167 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-catalog-content\") pod \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\" (UID: \"ae319b69-b14d-44e8-ad61-65ccf69bbfd4\") " Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.094077 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-utilities" (OuterVolumeSpecName: "utilities") pod "ae319b69-b14d-44e8-ad61-65ccf69bbfd4" (UID: 
"ae319b69-b14d-44e8-ad61-65ccf69bbfd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.100110 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-kube-api-access-8kn8d" (OuterVolumeSpecName: "kube-api-access-8kn8d") pod "ae319b69-b14d-44e8-ad61-65ccf69bbfd4" (UID: "ae319b69-b14d-44e8-ad61-65ccf69bbfd4"). InnerVolumeSpecName "kube-api-access-8kn8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.145417 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae319b69-b14d-44e8-ad61-65ccf69bbfd4" (UID: "ae319b69-b14d-44e8-ad61-65ccf69bbfd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.195563 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.195633 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kn8d\" (UniqueName: \"kubernetes.io/projected/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-kube-api-access-8kn8d\") on node \"crc\" DevicePath \"\"" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.195656 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae319b69-b14d-44e8-ad61-65ccf69bbfd4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.427181 4880 generic.go:334] "Generic (PLEG): container finished" 
podID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerID="e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c" exitCode=0 Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.428499 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl5rb" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.428477 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerDied","Data":"e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c"} Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.428821 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl5rb" event={"ID":"ae319b69-b14d-44e8-ad61-65ccf69bbfd4","Type":"ContainerDied","Data":"101c95175a9a027f021b58e9e79351359d6e3c8be6d1f9ea438b39d94bb1783a"} Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.428861 4880 scope.go:117] "RemoveContainer" containerID="e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.467936 4880 scope.go:117] "RemoveContainer" containerID="54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.495923 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl5rb"] Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.506468 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zl5rb"] Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.512114 4880 scope.go:117] "RemoveContainer" containerID="d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.563964 4880 scope.go:117] "RemoveContainer" 
containerID="e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c" Dec 01 04:25:34 crc kubenswrapper[4880]: E1201 04:25:34.564394 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c\": container with ID starting with e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c not found: ID does not exist" containerID="e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.564441 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c"} err="failed to get container status \"e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c\": rpc error: code = NotFound desc = could not find container \"e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c\": container with ID starting with e1784e22f0a9b908f016537a01654192ffb136e3933d9228797bd7f3508ade6c not found: ID does not exist" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.564474 4880 scope.go:117] "RemoveContainer" containerID="54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610" Dec 01 04:25:34 crc kubenswrapper[4880]: E1201 04:25:34.565466 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610\": container with ID starting with 54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610 not found: ID does not exist" containerID="54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.565510 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610"} err="failed to get container status \"54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610\": rpc error: code = NotFound desc = could not find container \"54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610\": container with ID starting with 54a7636330471c8c05339117dc19496973042ad851a25d44d8c1f82b40273610 not found: ID does not exist" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.565537 4880 scope.go:117] "RemoveContainer" containerID="d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27" Dec 01 04:25:34 crc kubenswrapper[4880]: E1201 04:25:34.566086 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27\": container with ID starting with d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27 not found: ID does not exist" containerID="d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.566286 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27"} err="failed to get container status \"d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27\": rpc error: code = NotFound desc = could not find container \"d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27\": container with ID starting with d4192e763f497990b7a7c97b5d4318a151acb33eb4a041d9fc801a8b534f8b27 not found: ID does not exist" Dec 01 04:25:34 crc kubenswrapper[4880]: I1201 04:25:34.798955 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" path="/var/lib/kubelet/pods/ae319b69-b14d-44e8-ad61-65ccf69bbfd4/volumes" Dec 01 04:26:47 crc kubenswrapper[4880]: I1201 
04:26:47.369118 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:26:47 crc kubenswrapper[4880]: I1201 04:26:47.369713 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:26:53 crc kubenswrapper[4880]: I1201 04:26:53.790342 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="70645748-70d3-43e0-a111-440adaacf742" containerName="galera" probeResult="failure" output="command timed out" Dec 01 04:26:59 crc kubenswrapper[4880]: E1201 04:26:59.707356 4880 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:51466->38.102.83.39:42095: write tcp 38.102.83.39:51466->38.102.83.39:42095: write: broken pipe Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.064290 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqdms"] Dec 01 04:27:10 crc kubenswrapper[4880]: E1201 04:27:10.065123 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="extract-utilities" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.065136 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="extract-utilities" Dec 01 04:27:10 crc kubenswrapper[4880]: E1201 04:27:10.065159 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" 
containerName="registry-server" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.065166 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="registry-server" Dec 01 04:27:10 crc kubenswrapper[4880]: E1201 04:27:10.065180 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="extract-content" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.065186 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="extract-content" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.065368 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae319b69-b14d-44e8-ad61-65ccf69bbfd4" containerName="registry-server" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.066689 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.077791 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqdms"] Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.167009 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-catalog-content\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.167168 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-utilities\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " 
pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.167420 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4868\" (UniqueName: \"kubernetes.io/projected/85a53bfb-7aaa-4467-998e-342e1a1aaea6-kube-api-access-d4868\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.268835 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4868\" (UniqueName: \"kubernetes.io/projected/85a53bfb-7aaa-4467-998e-342e1a1aaea6-kube-api-access-d4868\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.269165 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-catalog-content\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.269245 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-utilities\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.269651 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-catalog-content\") pod \"community-operators-hqdms\" (UID: 
\"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.269740 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-utilities\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.308801 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4868\" (UniqueName: \"kubernetes.io/projected/85a53bfb-7aaa-4467-998e-342e1a1aaea6-kube-api-access-d4868\") pod \"community-operators-hqdms\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.385204 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:10 crc kubenswrapper[4880]: I1201 04:27:10.935777 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqdms"] Dec 01 04:27:11 crc kubenswrapper[4880]: I1201 04:27:11.470295 4880 generic.go:334] "Generic (PLEG): container finished" podID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerID="c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8" exitCode=0 Dec 01 04:27:11 crc kubenswrapper[4880]: I1201 04:27:11.470357 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerDied","Data":"c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8"} Dec 01 04:27:11 crc kubenswrapper[4880]: I1201 04:27:11.472064 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" 
event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerStarted","Data":"56d8e4bce486a6952966b3e36aaae5f4bd0774794eb7c2d0f517eb470c7089ca"} Dec 01 04:27:11 crc kubenswrapper[4880]: I1201 04:27:11.475307 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:27:12 crc kubenswrapper[4880]: I1201 04:27:12.484758 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerStarted","Data":"5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe"} Dec 01 04:27:14 crc kubenswrapper[4880]: I1201 04:27:14.507487 4880 generic.go:334] "Generic (PLEG): container finished" podID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerID="5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe" exitCode=0 Dec 01 04:27:14 crc kubenswrapper[4880]: I1201 04:27:14.507589 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerDied","Data":"5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe"} Dec 01 04:27:15 crc kubenswrapper[4880]: I1201 04:27:15.517757 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerStarted","Data":"d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d"} Dec 01 04:27:15 crc kubenswrapper[4880]: I1201 04:27:15.548827 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqdms" podStartSLOduration=2.087041901 podStartE2EDuration="5.548808114s" podCreationTimestamp="2025-12-01 04:27:10 +0000 UTC" firstStartedPulling="2025-12-01 04:27:11.474967025 +0000 UTC m=+5460.986221407" lastFinishedPulling="2025-12-01 04:27:14.936733238 +0000 UTC 
m=+5464.447987620" observedRunningTime="2025-12-01 04:27:15.535048697 +0000 UTC m=+5465.046303069" watchObservedRunningTime="2025-12-01 04:27:15.548808114 +0000 UTC m=+5465.060062486" Dec 01 04:27:17 crc kubenswrapper[4880]: I1201 04:27:17.369501 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:27:17 crc kubenswrapper[4880]: I1201 04:27:17.370630 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:27:20 crc kubenswrapper[4880]: I1201 04:27:20.385802 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:20 crc kubenswrapper[4880]: I1201 04:27:20.386394 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:20 crc kubenswrapper[4880]: I1201 04:27:20.476245 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:20 crc kubenswrapper[4880]: I1201 04:27:20.617235 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:20 crc kubenswrapper[4880]: I1201 04:27:20.725284 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqdms"] Dec 01 04:27:22 crc kubenswrapper[4880]: I1201 04:27:22.609099 4880 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/community-operators-hqdms" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="registry-server" containerID="cri-o://d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d" gracePeriod=2 Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.083844 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.234527 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-catalog-content\") pod \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.234687 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4868\" (UniqueName: \"kubernetes.io/projected/85a53bfb-7aaa-4467-998e-342e1a1aaea6-kube-api-access-d4868\") pod \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.234916 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-utilities\") pod \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\" (UID: \"85a53bfb-7aaa-4467-998e-342e1a1aaea6\") " Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.235653 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-utilities" (OuterVolumeSpecName: "utilities") pod "85a53bfb-7aaa-4467-998e-342e1a1aaea6" (UID: "85a53bfb-7aaa-4467-998e-342e1a1aaea6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.241761 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a53bfb-7aaa-4467-998e-342e1a1aaea6-kube-api-access-d4868" (OuterVolumeSpecName: "kube-api-access-d4868") pod "85a53bfb-7aaa-4467-998e-342e1a1aaea6" (UID: "85a53bfb-7aaa-4467-998e-342e1a1aaea6"). InnerVolumeSpecName "kube-api-access-d4868". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.287327 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85a53bfb-7aaa-4467-998e-342e1a1aaea6" (UID: "85a53bfb-7aaa-4467-998e-342e1a1aaea6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.337039 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.337070 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85a53bfb-7aaa-4467-998e-342e1a1aaea6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.337081 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4868\" (UniqueName: \"kubernetes.io/projected/85a53bfb-7aaa-4467-998e-342e1a1aaea6-kube-api-access-d4868\") on node \"crc\" DevicePath \"\"" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.622810 4880 generic.go:334] "Generic (PLEG): container finished" podID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" 
containerID="d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d" exitCode=0 Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.622850 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerDied","Data":"d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d"} Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.622934 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqdms" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.623095 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqdms" event={"ID":"85a53bfb-7aaa-4467-998e-342e1a1aaea6","Type":"ContainerDied","Data":"56d8e4bce486a6952966b3e36aaae5f4bd0774794eb7c2d0f517eb470c7089ca"} Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.623114 4880 scope.go:117] "RemoveContainer" containerID="d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.656831 4880 scope.go:117] "RemoveContainer" containerID="5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.692520 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqdms"] Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.692693 4880 scope.go:117] "RemoveContainer" containerID="c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.727836 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hqdms"] Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.733170 4880 scope.go:117] "RemoveContainer" containerID="d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d" Dec 01 
04:27:23 crc kubenswrapper[4880]: E1201 04:27:23.734762 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d\": container with ID starting with d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d not found: ID does not exist" containerID="d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.734813 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d"} err="failed to get container status \"d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d\": rpc error: code = NotFound desc = could not find container \"d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d\": container with ID starting with d0a7319621b2600a66ade08f1aa13501eb288e43b44e3bb6868b8c08188ac53d not found: ID does not exist" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.734846 4880 scope.go:117] "RemoveContainer" containerID="5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe" Dec 01 04:27:23 crc kubenswrapper[4880]: E1201 04:27:23.735447 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe\": container with ID starting with 5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe not found: ID does not exist" containerID="5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.735490 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe"} err="failed to get container status 
\"5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe\": rpc error: code = NotFound desc = could not find container \"5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe\": container with ID starting with 5dfabb383706e2c4353f29070cb20cee7c48500d5f774332c42996aade3723fe not found: ID does not exist" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.735547 4880 scope.go:117] "RemoveContainer" containerID="c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8" Dec 01 04:27:23 crc kubenswrapper[4880]: E1201 04:27:23.736344 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8\": container with ID starting with c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8 not found: ID does not exist" containerID="c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8" Dec 01 04:27:23 crc kubenswrapper[4880]: I1201 04:27:23.736383 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8"} err="failed to get container status \"c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8\": rpc error: code = NotFound desc = could not find container \"c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8\": container with ID starting with c576baa6bdfb27d93dcf817819a8915b2f811f55920b5df20896cc56657cf9a8 not found: ID does not exist" Dec 01 04:27:24 crc kubenswrapper[4880]: I1201 04:27:24.800193 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" path="/var/lib/kubelet/pods/85a53bfb-7aaa-4467-998e-342e1a1aaea6/volumes" Dec 01 04:27:43 crc kubenswrapper[4880]: I1201 04:27:43.977165 4880 scope.go:117] "RemoveContainer" containerID="89132bd2e4734549f527e943496045244d6bfd11422d9dbd249254f4066bc7cf" Dec 01 
04:27:44 crc kubenswrapper[4880]: I1201 04:27:44.004971 4880 scope.go:117] "RemoveContainer" containerID="1b1a8464a37233b8b8b457dd94723f962b93b9e5d90eb85895ee8f5c7b90aac5" Dec 01 04:27:44 crc kubenswrapper[4880]: I1201 04:27:44.044090 4880 scope.go:117] "RemoveContainer" containerID="b2e6dcc0e38d1be96495b05aea49fd63ded1d6ca1995c9ed8c098a5119218750" Dec 01 04:27:47 crc kubenswrapper[4880]: I1201 04:27:47.369720 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:27:47 crc kubenswrapper[4880]: I1201 04:27:47.370215 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:27:47 crc kubenswrapper[4880]: I1201 04:27:47.370282 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:27:47 crc kubenswrapper[4880]: I1201 04:27:47.371401 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd58a34c72b46c115c5a2ffd467b9e557213b09163cc6879a8bc7a510df98b1e"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:27:47 crc kubenswrapper[4880]: I1201 04:27:47.371496 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" 
containerName="machine-config-daemon" containerID="cri-o://fd58a34c72b46c115c5a2ffd467b9e557213b09163cc6879a8bc7a510df98b1e" gracePeriod=600 Dec 01 04:27:48 crc kubenswrapper[4880]: I1201 04:27:48.103219 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="fd58a34c72b46c115c5a2ffd467b9e557213b09163cc6879a8bc7a510df98b1e" exitCode=0 Dec 01 04:27:48 crc kubenswrapper[4880]: I1201 04:27:48.103292 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"fd58a34c72b46c115c5a2ffd467b9e557213b09163cc6879a8bc7a510df98b1e"} Dec 01 04:27:48 crc kubenswrapper[4880]: I1201 04:27:48.103594 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e"} Dec 01 04:27:48 crc kubenswrapper[4880]: I1201 04:27:48.103623 4880 scope.go:117] "RemoveContainer" containerID="837a8301e9872c250715b72eb6739fb9daa058365df4a150d65884f231be29a8" Dec 01 04:29:24 crc kubenswrapper[4880]: I1201 04:29:24.345735 4880 generic.go:334] "Generic (PLEG): container finished" podID="b0802478-b2a7-43fa-bcba-1e7a154e9572" containerID="a9209d448d332d69de51dd47f83edd03dfeb93fa9d40c818bb333075dd953359" exitCode=0 Dec 01 04:29:24 crc kubenswrapper[4880]: I1201 04:29:24.345854 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"b0802478-b2a7-43fa-bcba-1e7a154e9572","Type":"ContainerDied","Data":"a9209d448d332d69de51dd47f83edd03dfeb93fa9d40c818bb333075dd953359"} Dec 01 04:29:25 crc kubenswrapper[4880]: I1201 04:29:25.990164 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.088294 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Dec 01 04:29:26 crc kubenswrapper[4880]: E1201 04:29:26.088760 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="registry-server" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.088777 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="registry-server" Dec 01 04:29:26 crc kubenswrapper[4880]: E1201 04:29:26.088792 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="extract-content" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.088815 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="extract-content" Dec 01 04:29:26 crc kubenswrapper[4880]: E1201 04:29:26.088827 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0802478-b2a7-43fa-bcba-1e7a154e9572" containerName="tempest-tests-tempest-tests-runner" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.088833 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0802478-b2a7-43fa-bcba-1e7a154e9572" containerName="tempest-tests-tempest-tests-runner" Dec 01 04:29:26 crc kubenswrapper[4880]: E1201 04:29:26.088849 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="extract-utilities" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.088856 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="extract-utilities" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.089108 4880 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="85a53bfb-7aaa-4467-998e-342e1a1aaea6" containerName="registry-server" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.089149 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0802478-b2a7-43fa-bcba-1e7a154e9572" containerName="tempest-tests-tempest-tests-runner" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.089959 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.099227 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.099230 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.111217 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.176679 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ssh-key\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.176794 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-workdir\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.176816 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config-secret\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.176839 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ca-certs\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.176887 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.176958 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-config-data\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.177155 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwzzh\" (UniqueName: \"kubernetes.io/projected/b0802478-b2a7-43fa-bcba-1e7a154e9572-kube-api-access-zwzzh\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.177532 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.177806 4880 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-temporary\") pod \"b0802478-b2a7-43fa-bcba-1e7a154e9572\" (UID: \"b0802478-b2a7-43fa-bcba-1e7a154e9572\") " Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.178036 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-config-data" (OuterVolumeSpecName: "config-data") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.178734 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.182382 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.185309 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.191337 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0802478-b2a7-43fa-bcba-1e7a154e9572-kube-api-access-zwzzh" (OuterVolumeSpecName: "kube-api-access-zwzzh") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "kube-api-access-zwzzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.207734 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.213190 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.216056 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.233633 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.252794 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b0802478-b2a7-43fa-bcba-1e7a154e9572" (UID: "b0802478-b2a7-43fa-bcba-1e7a154e9572"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280511 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280576 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmwq\" (UniqueName: \"kubernetes.io/projected/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-kube-api-access-nkmwq\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280625 4880 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280660 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280695 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280767 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280821 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280881 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280904 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280953 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280966 4880 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280977 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc 
kubenswrapper[4880]: I1201 04:29:26.280987 4880 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b0802478-b2a7-43fa-bcba-1e7a154e9572-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.280995 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0802478-b2a7-43fa-bcba-1e7a154e9572-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.281004 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwzzh\" (UniqueName: \"kubernetes.io/projected/b0802478-b2a7-43fa-bcba-1e7a154e9572-kube-api-access-zwzzh\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.281013 4880 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b0802478-b2a7-43fa-bcba-1e7a154e9572-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.322558 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.366234 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"b0802478-b2a7-43fa-bcba-1e7a154e9572","Type":"ContainerDied","Data":"968e92da00b26f7d12637e0b6c5b8fbdbf94affdae08e0305bb6b89ace01ca29"} Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.366269 4880 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="968e92da00b26f7d12637e0b6c5b8fbdbf94affdae08e0305bb6b89ace01ca29" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.366308 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382157 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382207 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382249 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382287 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382320 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382345 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382362 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.382424 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmwq\" (UniqueName: \"kubernetes.io/projected/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-kube-api-access-nkmwq\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.383511 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.383649 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.383792 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.384567 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.386795 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc 
kubenswrapper[4880]: I1201 04:29:26.386974 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.389589 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.401832 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmwq\" (UniqueName: \"kubernetes.io/projected/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-kube-api-access-nkmwq\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.406740 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 04:29:26 crc kubenswrapper[4880]: I1201 04:29:26.949946 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Dec 01 04:29:27 crc kubenswrapper[4880]: I1201 04:29:27.375612 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"094b499c-7f84-4ecc-b2dd-9792ecdb54a4","Type":"ContainerStarted","Data":"b09e8cfaa4095d2440bdb1f3c83edbf523ff9ffc36e4ccd8419ba042828bd6b4"} Dec 01 04:29:29 crc kubenswrapper[4880]: I1201 04:29:29.402218 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"094b499c-7f84-4ecc-b2dd-9792ecdb54a4","Type":"ContainerStarted","Data":"28ae51c44cc948fa74c9b51048d573fbb8ba14b214a41ca5da59e26b73ec6bea"} Dec 01 04:29:29 crc kubenswrapper[4880]: I1201 04:29:29.425711 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=3.42568827 podStartE2EDuration="3.42568827s" podCreationTimestamp="2025-12-01 04:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 04:29:29.41913238 +0000 UTC m=+5598.930386762" watchObservedRunningTime="2025-12-01 04:29:29.42568827 +0000 UTC m=+5598.936942652" Dec 01 04:29:47 crc kubenswrapper[4880]: I1201 04:29:47.369645 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:29:47 crc kubenswrapper[4880]: I1201 04:29:47.370479 4880 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.158198 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w"] Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.160346 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.162656 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.166233 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.170311 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w"] Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.303176 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d15940db-2356-4f35-80cb-7ac782831281-config-volume\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.303489 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns4cv\" (UniqueName: 
\"kubernetes.io/projected/d15940db-2356-4f35-80cb-7ac782831281-kube-api-access-ns4cv\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.303628 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d15940db-2356-4f35-80cb-7ac782831281-secret-volume\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.406281 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d15940db-2356-4f35-80cb-7ac782831281-secret-volume\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.406506 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d15940db-2356-4f35-80cb-7ac782831281-config-volume\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.406590 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns4cv\" (UniqueName: \"kubernetes.io/projected/d15940db-2356-4f35-80cb-7ac782831281-kube-api-access-ns4cv\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc 
kubenswrapper[4880]: I1201 04:30:00.408003 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d15940db-2356-4f35-80cb-7ac782831281-config-volume\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.413229 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d15940db-2356-4f35-80cb-7ac782831281-secret-volume\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.425829 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns4cv\" (UniqueName: \"kubernetes.io/projected/d15940db-2356-4f35-80cb-7ac782831281-kube-api-access-ns4cv\") pod \"collect-profiles-29409390-4694w\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.496265 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:00 crc kubenswrapper[4880]: I1201 04:30:00.986736 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w"] Dec 01 04:30:01 crc kubenswrapper[4880]: I1201 04:30:01.757621 4880 generic.go:334] "Generic (PLEG): container finished" podID="d15940db-2356-4f35-80cb-7ac782831281" containerID="e23c5b1249c3b6b57dcb4f0a39b1f63e910b95e853cf5fa7531fb2f519e38790" exitCode=0 Dec 01 04:30:01 crc kubenswrapper[4880]: I1201 04:30:01.757970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" event={"ID":"d15940db-2356-4f35-80cb-7ac782831281","Type":"ContainerDied","Data":"e23c5b1249c3b6b57dcb4f0a39b1f63e910b95e853cf5fa7531fb2f519e38790"} Dec 01 04:30:01 crc kubenswrapper[4880]: I1201 04:30:01.758004 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" event={"ID":"d15940db-2356-4f35-80cb-7ac782831281","Type":"ContainerStarted","Data":"666c2adb3a4dc7122ef56ec5b4f0c4c167bd822e39f5ecfc094108495bb6c399"} Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.184082 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.365299 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns4cv\" (UniqueName: \"kubernetes.io/projected/d15940db-2356-4f35-80cb-7ac782831281-kube-api-access-ns4cv\") pod \"d15940db-2356-4f35-80cb-7ac782831281\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.365383 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d15940db-2356-4f35-80cb-7ac782831281-config-volume\") pod \"d15940db-2356-4f35-80cb-7ac782831281\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.365463 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d15940db-2356-4f35-80cb-7ac782831281-secret-volume\") pod \"d15940db-2356-4f35-80cb-7ac782831281\" (UID: \"d15940db-2356-4f35-80cb-7ac782831281\") " Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.366098 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15940db-2356-4f35-80cb-7ac782831281-config-volume" (OuterVolumeSpecName: "config-volume") pod "d15940db-2356-4f35-80cb-7ac782831281" (UID: "d15940db-2356-4f35-80cb-7ac782831281"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.372324 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d15940db-2356-4f35-80cb-7ac782831281-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d15940db-2356-4f35-80cb-7ac782831281" (UID: "d15940db-2356-4f35-80cb-7ac782831281"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.374799 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15940db-2356-4f35-80cb-7ac782831281-kube-api-access-ns4cv" (OuterVolumeSpecName: "kube-api-access-ns4cv") pod "d15940db-2356-4f35-80cb-7ac782831281" (UID: "d15940db-2356-4f35-80cb-7ac782831281"). InnerVolumeSpecName "kube-api-access-ns4cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.467230 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns4cv\" (UniqueName: \"kubernetes.io/projected/d15940db-2356-4f35-80cb-7ac782831281-kube-api-access-ns4cv\") on node \"crc\" DevicePath \"\"" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.467956 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d15940db-2356-4f35-80cb-7ac782831281-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.467989 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d15940db-2356-4f35-80cb-7ac782831281-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.787536 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" event={"ID":"d15940db-2356-4f35-80cb-7ac782831281","Type":"ContainerDied","Data":"666c2adb3a4dc7122ef56ec5b4f0c4c167bd822e39f5ecfc094108495bb6c399"} Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.787892 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666c2adb3a4dc7122ef56ec5b4f0c4c167bd822e39f5ecfc094108495bb6c399" Dec 01 04:30:03 crc kubenswrapper[4880]: I1201 04:30:03.787607 4880 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w" Dec 01 04:30:04 crc kubenswrapper[4880]: I1201 04:30:04.265921 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4"] Dec 01 04:30:04 crc kubenswrapper[4880]: I1201 04:30:04.275828 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409345-pzfv4"] Dec 01 04:30:04 crc kubenswrapper[4880]: I1201 04:30:04.835275 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d231f53-ae4c-410a-a355-25f4dc70fda6" path="/var/lib/kubelet/pods/0d231f53-ae4c-410a-a355-25f4dc70fda6/volumes" Dec 01 04:30:17 crc kubenswrapper[4880]: I1201 04:30:17.369162 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:30:17 crc kubenswrapper[4880]: I1201 04:30:17.369924 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.358773 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5874f859b7-82pgv"] Dec 01 04:30:21 crc kubenswrapper[4880]: E1201 04:30:21.359748 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15940db-2356-4f35-80cb-7ac782831281" containerName="collect-profiles" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.359771 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d15940db-2356-4f35-80cb-7ac782831281" containerName="collect-profiles" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.360052 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15940db-2356-4f35-80cb-7ac782831281" containerName="collect-profiles" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.361043 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383243 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-config\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383286 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-ovndb-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383331 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2g7m\" (UniqueName: \"kubernetes.io/projected/ead09403-3fb6-4417-a9d9-694b3070c66d-kube-api-access-c2g7m\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383360 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-internal-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: 
\"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383495 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-public-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383565 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-httpd-config\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.383629 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-combined-ca-bundle\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.386123 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5874f859b7-82pgv"] Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.485980 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-ovndb-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.486070 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2g7m\" 
(UniqueName: \"kubernetes.io/projected/ead09403-3fb6-4417-a9d9-694b3070c66d-kube-api-access-c2g7m\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.486112 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-internal-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.486217 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-public-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.486298 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-httpd-config\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.486370 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-combined-ca-bundle\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.486420 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-config\") pod 
\"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.503433 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-combined-ca-bundle\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.518645 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-httpd-config\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.520078 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-internal-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.531454 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2g7m\" (UniqueName: \"kubernetes.io/projected/ead09403-3fb6-4417-a9d9-694b3070c66d-kube-api-access-c2g7m\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.534059 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-ovndb-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " 
pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.535883 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-public-tls-certs\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.536018 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-config\") pod \"neutron-5874f859b7-82pgv\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:21 crc kubenswrapper[4880]: I1201 04:30:21.679047 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:22 crc kubenswrapper[4880]: I1201 04:30:22.254298 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5874f859b7-82pgv"] Dec 01 04:30:23 crc kubenswrapper[4880]: I1201 04:30:23.045864 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5874f859b7-82pgv" event={"ID":"ead09403-3fb6-4417-a9d9-694b3070c66d","Type":"ContainerStarted","Data":"35b1fc4a09d622638d3eb42da8cce8dd8c2da920b72b88ff2050e1d94bb53189"} Dec 01 04:30:23 crc kubenswrapper[4880]: I1201 04:30:23.046285 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:23 crc kubenswrapper[4880]: I1201 04:30:23.046301 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5874f859b7-82pgv" event={"ID":"ead09403-3fb6-4417-a9d9-694b3070c66d","Type":"ContainerStarted","Data":"84ff8cf7c9b10f65b33e017516fc50cd8be21fc5f9db57fa0cf51d59403bc16a"} Dec 01 04:30:23 crc kubenswrapper[4880]: I1201 04:30:23.046313 4880 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-5874f859b7-82pgv" event={"ID":"ead09403-3fb6-4417-a9d9-694b3070c66d","Type":"ContainerStarted","Data":"d5cea0f0b69b07ec884d35cd3b99bbf554ce762527ff3f4ee60de1d5b0f4bd69"} Dec 01 04:30:23 crc kubenswrapper[4880]: I1201 04:30:23.073145 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5874f859b7-82pgv" podStartSLOduration=2.073124352 podStartE2EDuration="2.073124352s" podCreationTimestamp="2025-12-01 04:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 04:30:23.069193166 +0000 UTC m=+5652.580447548" watchObservedRunningTime="2025-12-01 04:30:23.073124352 +0000 UTC m=+5652.584378724" Dec 01 04:30:44 crc kubenswrapper[4880]: I1201 04:30:44.216432 4880 scope.go:117] "RemoveContainer" containerID="67c0861ad1431184f7eb6884d9960ca38a4efc0c69d65f4a077683e6270cac2d" Dec 01 04:30:47 crc kubenswrapper[4880]: I1201 04:30:47.368926 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:30:47 crc kubenswrapper[4880]: I1201 04:30:47.369511 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:30:47 crc kubenswrapper[4880]: I1201 04:30:47.369584 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:30:47 crc kubenswrapper[4880]: I1201 04:30:47.370763 4880 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:30:47 crc kubenswrapper[4880]: I1201 04:30:47.370901 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" gracePeriod=600 Dec 01 04:30:47 crc kubenswrapper[4880]: E1201 04:30:47.504388 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:30:48 crc kubenswrapper[4880]: I1201 04:30:48.353509 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" exitCode=0 Dec 01 04:30:48 crc kubenswrapper[4880]: I1201 04:30:48.354194 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e"} Dec 01 04:30:48 crc kubenswrapper[4880]: I1201 04:30:48.354325 4880 scope.go:117] "RemoveContainer" 
containerID="fd58a34c72b46c115c5a2ffd467b9e557213b09163cc6879a8bc7a510df98b1e" Dec 01 04:30:48 crc kubenswrapper[4880]: I1201 04:30:48.355140 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:30:48 crc kubenswrapper[4880]: E1201 04:30:48.355532 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:30:51 crc kubenswrapper[4880]: I1201 04:30:51.708060 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5874f859b7-82pgv" Dec 01 04:30:51 crc kubenswrapper[4880]: I1201 04:30:51.814127 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d47fdc4c7-xl94f"] Dec 01 04:30:51 crc kubenswrapper[4880]: I1201 04:30:51.815262 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d47fdc4c7-xl94f" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-api" containerID="cri-o://12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61" gracePeriod=30 Dec 01 04:30:51 crc kubenswrapper[4880]: I1201 04:30:51.815596 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d47fdc4c7-xl94f" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-httpd" containerID="cri-o://4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617" gracePeriod=30 Dec 01 04:30:52 crc kubenswrapper[4880]: I1201 04:30:52.398399 4880 generic.go:334] "Generic (PLEG): container finished" podID="3b132fb3-f361-4b31-a0b7-73af662a12a6" 
containerID="4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617" exitCode=0 Dec 01 04:30:52 crc kubenswrapper[4880]: I1201 04:30:52.398626 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d47fdc4c7-xl94f" event={"ID":"3b132fb3-f361-4b31-a0b7-73af662a12a6","Type":"ContainerDied","Data":"4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617"} Dec 01 04:31:01 crc kubenswrapper[4880]: I1201 04:31:01.784487 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:31:01 crc kubenswrapper[4880]: E1201 04:31:01.787594 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.577404 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.611532 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-combined-ca-bundle\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.611783 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-httpd-config\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.612073 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-internal-tls-certs\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.612265 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-public-tls-certs\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.613465 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-ovndb-tls-certs\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.614223 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-config\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.614296 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjhpq\" (UniqueName: \"kubernetes.io/projected/3b132fb3-f361-4b31-a0b7-73af662a12a6-kube-api-access-sjhpq\") pod \"3b132fb3-f361-4b31-a0b7-73af662a12a6\" (UID: \"3b132fb3-f361-4b31-a0b7-73af662a12a6\") " Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.625687 4880 generic.go:334] "Generic (PLEG): container finished" podID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerID="12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61" exitCode=0 Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.625735 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d47fdc4c7-xl94f" event={"ID":"3b132fb3-f361-4b31-a0b7-73af662a12a6","Type":"ContainerDied","Data":"12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61"} Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.625761 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d47fdc4c7-xl94f" event={"ID":"3b132fb3-f361-4b31-a0b7-73af662a12a6","Type":"ContainerDied","Data":"dab72ee89928ae1f1241e3691435fb27d7fd2f8961eb30f775a5783621825f4d"} Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.625777 4880 scope.go:117] "RemoveContainer" containerID="4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.626108 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d47fdc4c7-xl94f" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.636170 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b132fb3-f361-4b31-a0b7-73af662a12a6-kube-api-access-sjhpq" (OuterVolumeSpecName: "kube-api-access-sjhpq") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "kube-api-access-sjhpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.636423 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.684184 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-config" (OuterVolumeSpecName: "config") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.711345 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.718560 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-config\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.718602 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjhpq\" (UniqueName: \"kubernetes.io/projected/3b132fb3-f361-4b31-a0b7-73af662a12a6-kube-api-access-sjhpq\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.718616 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.718630 4880 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.720969 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.721333 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.723510 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3b132fb3-f361-4b31-a0b7-73af662a12a6" (UID: "3b132fb3-f361-4b31-a0b7-73af662a12a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.740310 4880 scope.go:117] "RemoveContainer" containerID="12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.761605 4880 scope.go:117] "RemoveContainer" containerID="4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617" Dec 01 04:31:08 crc kubenswrapper[4880]: E1201 04:31:08.762638 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617\": container with ID starting with 4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617 not found: ID does not exist" containerID="4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.762672 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617"} err="failed to get container status \"4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617\": rpc error: code = NotFound desc = could not find container \"4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617\": container with ID starting with 4512411a24a75b8b4cf341e1644ebdd6f2ae59f0123799d1934f00d02196c617 not found: ID does not exist" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.762705 4880 scope.go:117] 
"RemoveContainer" containerID="12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61" Dec 01 04:31:08 crc kubenswrapper[4880]: E1201 04:31:08.763116 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61\": container with ID starting with 12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61 not found: ID does not exist" containerID="12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.763156 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61"} err="failed to get container status \"12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61\": rpc error: code = NotFound desc = could not find container \"12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61\": container with ID starting with 12f6d4d546501bb50e080aafaf0d063ed61bcfcced1eb75eba8460b153d68b61 not found: ID does not exist" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.821522 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.821868 4880 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.822104 4880 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b132fb3-f361-4b31-a0b7-73af662a12a6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 04:31:08 crc 
kubenswrapper[4880]: I1201 04:31:08.959446 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d47fdc4c7-xl94f"] Dec 01 04:31:08 crc kubenswrapper[4880]: I1201 04:31:08.976626 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d47fdc4c7-xl94f"] Dec 01 04:31:10 crc kubenswrapper[4880]: I1201 04:31:10.799117 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" path="/var/lib/kubelet/pods/3b132fb3-f361-4b31-a0b7-73af662a12a6/volumes" Dec 01 04:31:14 crc kubenswrapper[4880]: I1201 04:31:14.783796 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:31:14 crc kubenswrapper[4880]: E1201 04:31:14.784791 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:31:27 crc kubenswrapper[4880]: I1201 04:31:27.795242 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:31:27 crc kubenswrapper[4880]: E1201 04:31:27.796949 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:31:40 crc kubenswrapper[4880]: I1201 04:31:40.810267 4880 scope.go:117] "RemoveContainer" 
containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:31:40 crc kubenswrapper[4880]: E1201 04:31:40.811205 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.598196 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9f7jv"] Dec 01 04:31:51 crc kubenswrapper[4880]: E1201 04:31:51.598943 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-api" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.598956 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-api" Dec 01 04:31:51 crc kubenswrapper[4880]: E1201 04:31:51.598984 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-httpd" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.598990 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-httpd" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.599181 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-httpd" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.599205 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b132fb3-f361-4b31-a0b7-73af662a12a6" containerName="neutron-api" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.600457 4880 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.626226 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9f7jv"] Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.672465 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfwz\" (UniqueName: \"kubernetes.io/projected/ed7e8902-d88a-4b38-8043-9f08365a469b-kube-api-access-qvfwz\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.672541 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-catalog-content\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.672575 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-utilities\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.774337 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvfwz\" (UniqueName: \"kubernetes.io/projected/ed7e8902-d88a-4b38-8043-9f08365a469b-kube-api-access-qvfwz\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.774434 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-catalog-content\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.774486 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-utilities\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.775243 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-utilities\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.776619 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-catalog-content\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.811327 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvfwz\" (UniqueName: \"kubernetes.io/projected/ed7e8902-d88a-4b38-8043-9f08365a469b-kube-api-access-qvfwz\") pod \"redhat-operators-9f7jv\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:51 crc kubenswrapper[4880]: I1201 04:31:51.927536 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:31:52 crc kubenswrapper[4880]: I1201 04:31:52.398319 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9f7jv"] Dec 01 04:31:53 crc kubenswrapper[4880]: I1201 04:31:53.130091 4880 generic.go:334] "Generic (PLEG): container finished" podID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerID="d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025" exitCode=0 Dec 01 04:31:53 crc kubenswrapper[4880]: I1201 04:31:53.130143 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerDied","Data":"d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025"} Dec 01 04:31:53 crc kubenswrapper[4880]: I1201 04:31:53.130169 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerStarted","Data":"2004b559ce6dcb5feefd05cb295a3ab113a6bfbfe66c78ca5b0ef7803513b986"} Dec 01 04:31:53 crc kubenswrapper[4880]: I1201 04:31:53.783696 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:31:53 crc kubenswrapper[4880]: E1201 04:31:53.784176 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:31:54 crc kubenswrapper[4880]: I1201 04:31:54.140971 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" 
event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerStarted","Data":"878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186"} Dec 01 04:31:57 crc kubenswrapper[4880]: I1201 04:31:57.167125 4880 generic.go:334] "Generic (PLEG): container finished" podID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerID="878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186" exitCode=0 Dec 01 04:31:57 crc kubenswrapper[4880]: I1201 04:31:57.167213 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerDied","Data":"878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186"} Dec 01 04:31:58 crc kubenswrapper[4880]: I1201 04:31:58.178524 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerStarted","Data":"0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac"} Dec 01 04:31:58 crc kubenswrapper[4880]: I1201 04:31:58.208634 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9f7jv" podStartSLOduration=2.722002841 podStartE2EDuration="7.208603848s" podCreationTimestamp="2025-12-01 04:31:51 +0000 UTC" firstStartedPulling="2025-12-01 04:31:53.132174581 +0000 UTC m=+5742.643428953" lastFinishedPulling="2025-12-01 04:31:57.618775548 +0000 UTC m=+5747.130029960" observedRunningTime="2025-12-01 04:31:58.202083598 +0000 UTC m=+5747.713337980" watchObservedRunningTime="2025-12-01 04:31:58.208603848 +0000 UTC m=+5747.719858230" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.492002 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkjqq"] Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.496136 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.496345 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkjqq"] Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.528666 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-utilities\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.528735 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtksh\" (UniqueName: \"kubernetes.io/projected/6881be99-0ad5-4439-b7e0-790449b01a97-kube-api-access-rtksh\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.528765 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-catalog-content\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.631934 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-utilities\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.632175 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rtksh\" (UniqueName: \"kubernetes.io/projected/6881be99-0ad5-4439-b7e0-790449b01a97-kube-api-access-rtksh\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.632262 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-catalog-content\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.632462 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-utilities\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.632829 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-catalog-content\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.663096 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtksh\" (UniqueName: \"kubernetes.io/projected/6881be99-0ad5-4439-b7e0-790449b01a97-kube-api-access-rtksh\") pod \"redhat-marketplace-nkjqq\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:31:59 crc kubenswrapper[4880]: I1201 04:31:59.841592 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:00 crc kubenswrapper[4880]: I1201 04:32:00.317029 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkjqq"] Dec 01 04:32:01 crc kubenswrapper[4880]: I1201 04:32:01.207492 4880 generic.go:334] "Generic (PLEG): container finished" podID="6881be99-0ad5-4439-b7e0-790449b01a97" containerID="ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19" exitCode=0 Dec 01 04:32:01 crc kubenswrapper[4880]: I1201 04:32:01.207552 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkjqq" event={"ID":"6881be99-0ad5-4439-b7e0-790449b01a97","Type":"ContainerDied","Data":"ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19"} Dec 01 04:32:01 crc kubenswrapper[4880]: I1201 04:32:01.207754 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkjqq" event={"ID":"6881be99-0ad5-4439-b7e0-790449b01a97","Type":"ContainerStarted","Data":"13cd20845c551b035871bb7fe1fb67fb1a45bcf07384aff5d7219dd4d3fd0fcb"} Dec 01 04:32:01 crc kubenswrapper[4880]: I1201 04:32:01.928478 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:32:01 crc kubenswrapper[4880]: I1201 04:32:01.928798 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:32:02 crc kubenswrapper[4880]: I1201 04:32:02.985629 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9f7jv" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="registry-server" probeResult="failure" output=< Dec 01 04:32:02 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:32:02 crc kubenswrapper[4880]: > Dec 01 04:32:03 crc kubenswrapper[4880]: I1201 
04:32:03.246336 4880 generic.go:334] "Generic (PLEG): container finished" podID="6881be99-0ad5-4439-b7e0-790449b01a97" containerID="57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad" exitCode=0 Dec 01 04:32:03 crc kubenswrapper[4880]: I1201 04:32:03.246394 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkjqq" event={"ID":"6881be99-0ad5-4439-b7e0-790449b01a97","Type":"ContainerDied","Data":"57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad"} Dec 01 04:32:04 crc kubenswrapper[4880]: I1201 04:32:04.266629 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkjqq" event={"ID":"6881be99-0ad5-4439-b7e0-790449b01a97","Type":"ContainerStarted","Data":"17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7"} Dec 01 04:32:04 crc kubenswrapper[4880]: I1201 04:32:04.287335 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkjqq" podStartSLOduration=2.841026679 podStartE2EDuration="5.287320867s" podCreationTimestamp="2025-12-01 04:31:59 +0000 UTC" firstStartedPulling="2025-12-01 04:32:01.209238348 +0000 UTC m=+5750.720492720" lastFinishedPulling="2025-12-01 04:32:03.655532536 +0000 UTC m=+5753.166786908" observedRunningTime="2025-12-01 04:32:04.284702993 +0000 UTC m=+5753.795957405" watchObservedRunningTime="2025-12-01 04:32:04.287320867 +0000 UTC m=+5753.798575239" Dec 01 04:32:07 crc kubenswrapper[4880]: I1201 04:32:07.784273 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:32:07 crc kubenswrapper[4880]: E1201 04:32:07.784794 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:32:09 crc kubenswrapper[4880]: I1201 04:32:09.841842 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:09 crc kubenswrapper[4880]: I1201 04:32:09.842235 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:09 crc kubenswrapper[4880]: I1201 04:32:09.919601 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:10 crc kubenswrapper[4880]: I1201 04:32:10.384599 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:10 crc kubenswrapper[4880]: I1201 04:32:10.444701 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkjqq"] Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.338605 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nkjqq" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="registry-server" containerID="cri-o://17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7" gracePeriod=2 Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.859324 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.895605 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-utilities\") pod \"6881be99-0ad5-4439-b7e0-790449b01a97\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.895700 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-catalog-content\") pod \"6881be99-0ad5-4439-b7e0-790449b01a97\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.895752 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtksh\" (UniqueName: \"kubernetes.io/projected/6881be99-0ad5-4439-b7e0-790449b01a97-kube-api-access-rtksh\") pod \"6881be99-0ad5-4439-b7e0-790449b01a97\" (UID: \"6881be99-0ad5-4439-b7e0-790449b01a97\") " Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.899903 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-utilities" (OuterVolumeSpecName: "utilities") pod "6881be99-0ad5-4439-b7e0-790449b01a97" (UID: "6881be99-0ad5-4439-b7e0-790449b01a97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.920890 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6881be99-0ad5-4439-b7e0-790449b01a97" (UID: "6881be99-0ad5-4439-b7e0-790449b01a97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.939186 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6881be99-0ad5-4439-b7e0-790449b01a97-kube-api-access-rtksh" (OuterVolumeSpecName: "kube-api-access-rtksh") pod "6881be99-0ad5-4439-b7e0-790449b01a97" (UID: "6881be99-0ad5-4439-b7e0-790449b01a97"). InnerVolumeSpecName "kube-api-access-rtksh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.992137 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9f7jv" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="registry-server" probeResult="failure" output=< Dec 01 04:32:12 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:32:12 crc kubenswrapper[4880]: > Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.998036 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.998072 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6881be99-0ad5-4439-b7e0-790449b01a97-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:32:12 crc kubenswrapper[4880]: I1201 04:32:12.998085 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtksh\" (UniqueName: \"kubernetes.io/projected/6881be99-0ad5-4439-b7e0-790449b01a97-kube-api-access-rtksh\") on node \"crc\" DevicePath \"\"" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.350233 4880 generic.go:334] "Generic (PLEG): container finished" podID="6881be99-0ad5-4439-b7e0-790449b01a97" containerID="17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7" 
exitCode=0 Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.350286 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkjqq" event={"ID":"6881be99-0ad5-4439-b7e0-790449b01a97","Type":"ContainerDied","Data":"17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7"} Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.350330 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkjqq" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.350372 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkjqq" event={"ID":"6881be99-0ad5-4439-b7e0-790449b01a97","Type":"ContainerDied","Data":"13cd20845c551b035871bb7fe1fb67fb1a45bcf07384aff5d7219dd4d3fd0fcb"} Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.350397 4880 scope.go:117] "RemoveContainer" containerID="17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.381049 4880 scope.go:117] "RemoveContainer" containerID="57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.404785 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkjqq"] Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.406889 4880 scope.go:117] "RemoveContainer" containerID="ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.416414 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkjqq"] Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.456279 4880 scope.go:117] "RemoveContainer" containerID="17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7" Dec 01 04:32:13 crc kubenswrapper[4880]: E1201 04:32:13.456769 4880 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7\": container with ID starting with 17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7 not found: ID does not exist" containerID="17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.456807 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7"} err="failed to get container status \"17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7\": rpc error: code = NotFound desc = could not find container \"17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7\": container with ID starting with 17078a334381ccd2fee9a323f57392243420871e5d52040f3dd1ef20494a72c7 not found: ID does not exist" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.456836 4880 scope.go:117] "RemoveContainer" containerID="57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad" Dec 01 04:32:13 crc kubenswrapper[4880]: E1201 04:32:13.457290 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad\": container with ID starting with 57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad not found: ID does not exist" containerID="57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.457332 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad"} err="failed to get container status \"57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad\": rpc error: code = NotFound desc = could 
not find container \"57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad\": container with ID starting with 57f68933e942a00c621b01dcfeed99e067213823dab8028db15a15db101a60ad not found: ID does not exist" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.457359 4880 scope.go:117] "RemoveContainer" containerID="ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19" Dec 01 04:32:13 crc kubenswrapper[4880]: E1201 04:32:13.457681 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19\": container with ID starting with ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19 not found: ID does not exist" containerID="ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19" Dec 01 04:32:13 crc kubenswrapper[4880]: I1201 04:32:13.457715 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19"} err="failed to get container status \"ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19\": rpc error: code = NotFound desc = could not find container \"ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19\": container with ID starting with ecfc270859efe879d49be103447c429ec556011185a138b2876cddc33971ac19 not found: ID does not exist" Dec 01 04:32:14 crc kubenswrapper[4880]: I1201 04:32:14.798748 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" path="/var/lib/kubelet/pods/6881be99-0ad5-4439-b7e0-790449b01a97/volumes" Dec 01 04:32:18 crc kubenswrapper[4880]: I1201 04:32:18.783924 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:32:18 crc kubenswrapper[4880]: E1201 04:32:18.785055 4880 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:32:21 crc kubenswrapper[4880]: I1201 04:32:21.976687 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:32:22 crc kubenswrapper[4880]: I1201 04:32:22.025374 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:32:22 crc kubenswrapper[4880]: I1201 04:32:22.808914 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9f7jv"] Dec 01 04:32:23 crc kubenswrapper[4880]: I1201 04:32:23.463342 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9f7jv" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="registry-server" containerID="cri-o://0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac" gracePeriod=2 Dec 01 04:32:23 crc kubenswrapper[4880]: I1201 04:32:23.982742 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.012075 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-catalog-content\") pod \"ed7e8902-d88a-4b38-8043-9f08365a469b\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.012255 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvfwz\" (UniqueName: \"kubernetes.io/projected/ed7e8902-d88a-4b38-8043-9f08365a469b-kube-api-access-qvfwz\") pod \"ed7e8902-d88a-4b38-8043-9f08365a469b\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.012361 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-utilities\") pod \"ed7e8902-d88a-4b38-8043-9f08365a469b\" (UID: \"ed7e8902-d88a-4b38-8043-9f08365a469b\") " Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.019909 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7e8902-d88a-4b38-8043-9f08365a469b-kube-api-access-qvfwz" (OuterVolumeSpecName: "kube-api-access-qvfwz") pod "ed7e8902-d88a-4b38-8043-9f08365a469b" (UID: "ed7e8902-d88a-4b38-8043-9f08365a469b"). InnerVolumeSpecName "kube-api-access-qvfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.030007 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-utilities" (OuterVolumeSpecName: "utilities") pod "ed7e8902-d88a-4b38-8043-9f08365a469b" (UID: "ed7e8902-d88a-4b38-8043-9f08365a469b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.114171 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvfwz\" (UniqueName: \"kubernetes.io/projected/ed7e8902-d88a-4b38-8043-9f08365a469b-kube-api-access-qvfwz\") on node \"crc\" DevicePath \"\"" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.114396 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.131230 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed7e8902-d88a-4b38-8043-9f08365a469b" (UID: "ed7e8902-d88a-4b38-8043-9f08365a469b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.216173 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7e8902-d88a-4b38-8043-9f08365a469b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.473371 4880 generic.go:334] "Generic (PLEG): container finished" podID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerID="0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac" exitCode=0 Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.473424 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9f7jv" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.473457 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerDied","Data":"0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac"} Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.473951 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f7jv" event={"ID":"ed7e8902-d88a-4b38-8043-9f08365a469b","Type":"ContainerDied","Data":"2004b559ce6dcb5feefd05cb295a3ab113a6bfbfe66c78ca5b0ef7803513b986"} Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.473971 4880 scope.go:117] "RemoveContainer" containerID="0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.495108 4880 scope.go:117] "RemoveContainer" containerID="878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.530735 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9f7jv"] Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.540654 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9f7jv"] Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.554746 4880 scope.go:117] "RemoveContainer" containerID="d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.583151 4880 scope.go:117] "RemoveContainer" containerID="0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac" Dec 01 04:32:24 crc kubenswrapper[4880]: E1201 04:32:24.583486 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac\": container with ID starting with 0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac not found: ID does not exist" containerID="0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.583512 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac"} err="failed to get container status \"0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac\": rpc error: code = NotFound desc = could not find container \"0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac\": container with ID starting with 0b7160a0f0466254d3938a0b32c6f15e113ee4a4a9e04c05e150b9da667d8bac not found: ID does not exist" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.583531 4880 scope.go:117] "RemoveContainer" containerID="878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186" Dec 01 04:32:24 crc kubenswrapper[4880]: E1201 04:32:24.583812 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186\": container with ID starting with 878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186 not found: ID does not exist" containerID="878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.583832 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186"} err="failed to get container status \"878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186\": rpc error: code = NotFound desc = could not find container \"878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186\": container with ID 
starting with 878dfea28d0e1985fd8ae9c97ad3f4ca362db34d33af9b6d50f9b9ec14a2f186 not found: ID does not exist" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.583844 4880 scope.go:117] "RemoveContainer" containerID="d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025" Dec 01 04:32:24 crc kubenswrapper[4880]: E1201 04:32:24.584210 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025\": container with ID starting with d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025 not found: ID does not exist" containerID="d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.584231 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025"} err="failed to get container status \"d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025\": rpc error: code = NotFound desc = could not find container \"d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025\": container with ID starting with d756fb221279987af1bd42282bb525f21a333aa8ffc2d67a153b220bb43c8025 not found: ID does not exist" Dec 01 04:32:24 crc kubenswrapper[4880]: I1201 04:32:24.795880 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" path="/var/lib/kubelet/pods/ed7e8902-d88a-4b38-8043-9f08365a469b/volumes" Dec 01 04:32:29 crc kubenswrapper[4880]: I1201 04:32:29.783976 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:32:29 crc kubenswrapper[4880]: E1201 04:32:29.784728 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:32:44 crc kubenswrapper[4880]: I1201 04:32:44.797315 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:32:44 crc kubenswrapper[4880]: E1201 04:32:44.800856 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:32:58 crc kubenswrapper[4880]: I1201 04:32:58.785956 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:32:58 crc kubenswrapper[4880]: E1201 04:32:58.786911 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:33:11 crc kubenswrapper[4880]: I1201 04:33:11.784829 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:33:11 crc kubenswrapper[4880]: E1201 04:33:11.786224 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:33:25 crc kubenswrapper[4880]: I1201 04:33:25.784540 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:33:25 crc kubenswrapper[4880]: E1201 04:33:25.785648 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:33:38 crc kubenswrapper[4880]: I1201 04:33:38.784422 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:33:38 crc kubenswrapper[4880]: E1201 04:33:38.785831 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:33:51 crc kubenswrapper[4880]: I1201 04:33:51.784527 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:33:51 crc kubenswrapper[4880]: E1201 04:33:51.785785 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:34:06 crc kubenswrapper[4880]: I1201 04:34:06.784066 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:34:06 crc kubenswrapper[4880]: E1201 04:34:06.785934 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:34:18 crc kubenswrapper[4880]: I1201 04:34:18.786049 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:34:18 crc kubenswrapper[4880]: E1201 04:34:18.788939 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:34:32 crc kubenswrapper[4880]: I1201 04:34:32.784637 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:34:32 crc kubenswrapper[4880]: E1201 04:34:32.785611 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:34:45 crc kubenswrapper[4880]: I1201 04:34:45.783658 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:34:45 crc kubenswrapper[4880]: E1201 04:34:45.786286 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:34:56 crc kubenswrapper[4880]: I1201 04:34:56.784581 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:34:56 crc kubenswrapper[4880]: E1201 04:34:56.785439 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:35:07 crc kubenswrapper[4880]: I1201 04:35:07.784709 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:35:07 crc kubenswrapper[4880]: E1201 04:35:07.785610 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:35:19 crc kubenswrapper[4880]: I1201 04:35:19.784816 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:35:19 crc kubenswrapper[4880]: E1201 04:35:19.785808 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.433890 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qgp8b"] Dec 01 04:35:28 crc kubenswrapper[4880]: E1201 04:35:28.434738 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="registry-server" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.434750 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="registry-server" Dec 01 04:35:28 crc kubenswrapper[4880]: E1201 04:35:28.434767 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="extract-utilities" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.434773 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="extract-utilities" 
Dec 01 04:35:28 crc kubenswrapper[4880]: E1201 04:35:28.434803 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="extract-content" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.434809 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="extract-content" Dec 01 04:35:28 crc kubenswrapper[4880]: E1201 04:35:28.434819 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="registry-server" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.434826 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="registry-server" Dec 01 04:35:28 crc kubenswrapper[4880]: E1201 04:35:28.434837 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="extract-content" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.434842 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="extract-content" Dec 01 04:35:28 crc kubenswrapper[4880]: E1201 04:35:28.434854 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="extract-utilities" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.434861 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="extract-utilities" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.435048 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="6881be99-0ad5-4439-b7e0-790449b01a97" containerName="registry-server" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.435063 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7e8902-d88a-4b38-8043-9f08365a469b" containerName="registry-server" 
Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.436347 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.459526 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qgp8b"] Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.488850 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-utilities\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.489000 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6969\" (UniqueName: \"kubernetes.io/projected/db1ab242-03de-47a7-8253-2127ee700376-kube-api-access-h6969\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.489171 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-catalog-content\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.591447 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-utilities\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" 
Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.591807 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6969\" (UniqueName: \"kubernetes.io/projected/db1ab242-03de-47a7-8253-2127ee700376-kube-api-access-h6969\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.592000 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-catalog-content\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.592213 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-utilities\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.592490 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-catalog-content\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc kubenswrapper[4880]: I1201 04:35:28.615129 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6969\" (UniqueName: \"kubernetes.io/projected/db1ab242-03de-47a7-8253-2127ee700376-kube-api-access-h6969\") pod \"certified-operators-qgp8b\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") " pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:28 crc 
kubenswrapper[4880]: I1201 04:35:28.760306 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgp8b" Dec 01 04:35:29 crc kubenswrapper[4880]: I1201 04:35:29.257420 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qgp8b"] Dec 01 04:35:29 crc kubenswrapper[4880]: I1201 04:35:29.522987 4880 generic.go:334] "Generic (PLEG): container finished" podID="db1ab242-03de-47a7-8253-2127ee700376" containerID="8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74" exitCode=0 Dec 01 04:35:29 crc kubenswrapper[4880]: I1201 04:35:29.523056 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerDied","Data":"8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74"} Dec 01 04:35:29 crc kubenswrapper[4880]: I1201 04:35:29.523097 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerStarted","Data":"1a78f8d219e25598a0aa8fbcc4320d16839a0edca22661d2be01e6d78d7b543c"} Dec 01 04:35:29 crc kubenswrapper[4880]: I1201 04:35:29.525206 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:35:30 crc kubenswrapper[4880]: I1201 04:35:30.546386 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerStarted","Data":"8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c"} Dec 01 04:35:32 crc kubenswrapper[4880]: I1201 04:35:32.564362 4880 generic.go:334] "Generic (PLEG): container finished" podID="db1ab242-03de-47a7-8253-2127ee700376" containerID="8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c" exitCode=0 Dec 01 
04:35:32 crc kubenswrapper[4880]: I1201 04:35:32.564538 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerDied","Data":"8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c"} Dec 01 04:35:33 crc kubenswrapper[4880]: I1201 04:35:33.578623 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerStarted","Data":"14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753"} Dec 01 04:35:33 crc kubenswrapper[4880]: I1201 04:35:33.612907 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qgp8b" podStartSLOduration=2.049262738 podStartE2EDuration="5.612853319s" podCreationTimestamp="2025-12-01 04:35:28 +0000 UTC" firstStartedPulling="2025-12-01 04:35:29.524918703 +0000 UTC m=+5959.036173075" lastFinishedPulling="2025-12-01 04:35:33.088509284 +0000 UTC m=+5962.599763656" observedRunningTime="2025-12-01 04:35:33.598502697 +0000 UTC m=+5963.109757079" watchObservedRunningTime="2025-12-01 04:35:33.612853319 +0000 UTC m=+5963.124107691" Dec 01 04:35:34 crc kubenswrapper[4880]: I1201 04:35:34.784024 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e" Dec 01 04:35:34 crc kubenswrapper[4880]: E1201 04:35:34.784538 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:35:38 crc kubenswrapper[4880]: I1201 
04:35:38.760795 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qgp8b"
Dec 01 04:35:38 crc kubenswrapper[4880]: I1201 04:35:38.761364 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qgp8b"
Dec 01 04:35:38 crc kubenswrapper[4880]: I1201 04:35:38.822009 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qgp8b"
Dec 01 04:35:39 crc kubenswrapper[4880]: I1201 04:35:39.714053 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qgp8b"
Dec 01 04:35:39 crc kubenswrapper[4880]: I1201 04:35:39.765531 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qgp8b"]
Dec 01 04:35:41 crc kubenswrapper[4880]: I1201 04:35:41.680290 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qgp8b" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="registry-server" containerID="cri-o://14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753" gracePeriod=2
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.350460 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgp8b"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.472831 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-catalog-content\") pod \"db1ab242-03de-47a7-8253-2127ee700376\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") "
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.473383 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-utilities\") pod \"db1ab242-03de-47a7-8253-2127ee700376\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") "
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.473590 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6969\" (UniqueName: \"kubernetes.io/projected/db1ab242-03de-47a7-8253-2127ee700376-kube-api-access-h6969\") pod \"db1ab242-03de-47a7-8253-2127ee700376\" (UID: \"db1ab242-03de-47a7-8253-2127ee700376\") "
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.474477 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-utilities" (OuterVolumeSpecName: "utilities") pod "db1ab242-03de-47a7-8253-2127ee700376" (UID: "db1ab242-03de-47a7-8253-2127ee700376"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.479423 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1ab242-03de-47a7-8253-2127ee700376-kube-api-access-h6969" (OuterVolumeSpecName: "kube-api-access-h6969") pod "db1ab242-03de-47a7-8253-2127ee700376" (UID: "db1ab242-03de-47a7-8253-2127ee700376"). InnerVolumeSpecName "kube-api-access-h6969". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.540686 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db1ab242-03de-47a7-8253-2127ee700376" (UID: "db1ab242-03de-47a7-8253-2127ee700376"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.575599 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6969\" (UniqueName: \"kubernetes.io/projected/db1ab242-03de-47a7-8253-2127ee700376-kube-api-access-h6969\") on node \"crc\" DevicePath \"\""
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.575630 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.575641 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db1ab242-03de-47a7-8253-2127ee700376-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.693142 4880 generic.go:334] "Generic (PLEG): container finished" podID="db1ab242-03de-47a7-8253-2127ee700376" containerID="14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753" exitCode=0
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.693185 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerDied","Data":"14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753"}
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.693207 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgp8b"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.693232 4880 scope.go:117] "RemoveContainer" containerID="14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.693220 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgp8b" event={"ID":"db1ab242-03de-47a7-8253-2127ee700376","Type":"ContainerDied","Data":"1a78f8d219e25598a0aa8fbcc4320d16839a0edca22661d2be01e6d78d7b543c"}
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.721199 4880 scope.go:117] "RemoveContainer" containerID="8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.728996 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qgp8b"]
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.736509 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qgp8b"]
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.753675 4880 scope.go:117] "RemoveContainer" containerID="8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.797063 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db1ab242-03de-47a7-8253-2127ee700376" path="/var/lib/kubelet/pods/db1ab242-03de-47a7-8253-2127ee700376/volumes"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.803195 4880 scope.go:117] "RemoveContainer" containerID="14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753"
Dec 01 04:35:42 crc kubenswrapper[4880]: E1201 04:35:42.803631 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753\": container with ID starting with 14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753 not found: ID does not exist" containerID="14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.803686 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753"} err="failed to get container status \"14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753\": rpc error: code = NotFound desc = could not find container \"14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753\": container with ID starting with 14aafac07de9b3843b9fe5ba4acb3284ddeaa2c4e72e5430ac17cb52805c3753 not found: ID does not exist"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.803716 4880 scope.go:117] "RemoveContainer" containerID="8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c"
Dec 01 04:35:42 crc kubenswrapper[4880]: E1201 04:35:42.804309 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c\": container with ID starting with 8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c not found: ID does not exist" containerID="8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.804340 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c"} err="failed to get container status \"8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c\": rpc error: code = NotFound desc = could not find container \"8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c\": container with ID starting with 8b8f75c91aeb38237e33c3e582be5968e21ddb258ab9c96422045adbf940f26c not found: ID does not exist"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.804366 4880 scope.go:117] "RemoveContainer" containerID="8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74"
Dec 01 04:35:42 crc kubenswrapper[4880]: E1201 04:35:42.804723 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74\": container with ID starting with 8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74 not found: ID does not exist" containerID="8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74"
Dec 01 04:35:42 crc kubenswrapper[4880]: I1201 04:35:42.804767 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74"} err="failed to get container status \"8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74\": rpc error: code = NotFound desc = could not find container \"8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74\": container with ID starting with 8ab868cb89a4e89a7a21417c10c08980d051580cf504ddb2132af8a6a2b7da74 not found: ID does not exist"
Dec 01 04:35:46 crc kubenswrapper[4880]: I1201 04:35:46.784534 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e"
Dec 01 04:35:46 crc kubenswrapper[4880]: E1201 04:35:46.785509 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:35:59 crc kubenswrapper[4880]: I1201 04:35:59.784346 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e"
Dec 01 04:36:00 crc kubenswrapper[4880]: I1201 04:36:00.883351 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"460cdcb8b2b54c6d596acb11bfa0b4abded514a08884a82465344b40a80e60ac"}
Dec 01 04:38:17 crc kubenswrapper[4880]: I1201 04:38:17.368410 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 04:38:17 crc kubenswrapper[4880]: I1201 04:38:17.369005 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 04:38:47 crc kubenswrapper[4880]: I1201 04:38:47.369561 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 04:38:47 crc kubenswrapper[4880]: I1201 04:38:47.370279 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 04:39:17 crc kubenswrapper[4880]: I1201 04:39:17.368969 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 04:39:17 crc kubenswrapper[4880]: I1201 04:39:17.369620 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 04:39:17 crc kubenswrapper[4880]: I1201 04:39:17.369693 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh"
Dec 01 04:39:17 crc kubenswrapper[4880]: I1201 04:39:17.370835 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"460cdcb8b2b54c6d596acb11bfa0b4abded514a08884a82465344b40a80e60ac"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 04:39:17 crc kubenswrapper[4880]: I1201 04:39:17.370983 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://460cdcb8b2b54c6d596acb11bfa0b4abded514a08884a82465344b40a80e60ac" gracePeriod=600
Dec 01 04:39:18 crc kubenswrapper[4880]: I1201 04:39:18.065909 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="460cdcb8b2b54c6d596acb11bfa0b4abded514a08884a82465344b40a80e60ac" exitCode=0
Dec 01 04:39:18 crc kubenswrapper[4880]: I1201 04:39:18.065970 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"460cdcb8b2b54c6d596acb11bfa0b4abded514a08884a82465344b40a80e60ac"}
Dec 01 04:39:18 crc kubenswrapper[4880]: I1201 04:39:18.066427 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6"}
Dec 01 04:39:18 crc kubenswrapper[4880]: I1201 04:39:18.066455 4880 scope.go:117] "RemoveContainer" containerID="d7a54bb59672f9b367a2713a16b7172fe654baebc31985533cbf426f88fb916e"
Dec 01 04:41:17 crc kubenswrapper[4880]: I1201 04:41:17.369498 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 04:41:17 crc kubenswrapper[4880]: I1201 04:41:17.370175 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 04:41:47 crc kubenswrapper[4880]: I1201 04:41:47.369209 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 04:41:47 crc kubenswrapper[4880]: I1201 04:41:47.369699 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 04:42:17 crc kubenswrapper[4880]: I1201 04:42:17.369122 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 04:42:17 crc kubenswrapper[4880]: I1201 04:42:17.369569 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 04:42:17 crc kubenswrapper[4880]: I1201 04:42:17.369608 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh"
Dec 01 04:42:17 crc kubenswrapper[4880]: I1201 04:42:17.370068 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 04:42:17 crc kubenswrapper[4880]: I1201 04:42:17.370118 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" gracePeriod=600
Dec 01 04:42:17 crc kubenswrapper[4880]: E1201 04:42:17.494227 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:42:18 crc kubenswrapper[4880]: I1201 04:42:18.002743 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" exitCode=0
Dec 01 04:42:18 crc kubenswrapper[4880]: I1201 04:42:18.002811 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6"}
Dec 01 04:42:18 crc kubenswrapper[4880]: I1201 04:42:18.002898 4880 scope.go:117] "RemoveContainer" containerID="460cdcb8b2b54c6d596acb11bfa0b4abded514a08884a82465344b40a80e60ac"
Dec 01 04:42:18 crc kubenswrapper[4880]: I1201 04:42:18.004658 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6"
Dec 01 04:42:18 crc kubenswrapper[4880]: E1201 04:42:18.005695 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.795446 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j6z6h"]
Dec 01 04:42:27 crc kubenswrapper[4880]: E1201 04:42:27.796358 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="extract-content"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.796374 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="extract-content"
Dec 01 04:42:27 crc kubenswrapper[4880]: E1201 04:42:27.796409 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="extract-utilities"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.796417 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="extract-utilities"
Dec 01 04:42:27 crc kubenswrapper[4880]: E1201 04:42:27.796436 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="registry-server"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.796445 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="registry-server"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.796663 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1ab242-03de-47a7-8253-2127ee700376" containerName="registry-server"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.798292 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.823147 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6z6h"]
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.873254 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggmz\" (UniqueName: \"kubernetes.io/projected/02e4005d-90f5-4dce-975d-9fbba318c495-kube-api-access-kggmz\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.873543 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-catalog-content\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.873741 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-utilities\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.974662 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-utilities\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.974773 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggmz\" (UniqueName: \"kubernetes.io/projected/02e4005d-90f5-4dce-975d-9fbba318c495-kube-api-access-kggmz\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.974823 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-catalog-content\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.975196 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-utilities\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.975211 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-catalog-content\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:27 crc kubenswrapper[4880]: I1201 04:42:27.993290 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kggmz\" (UniqueName: \"kubernetes.io/projected/02e4005d-90f5-4dce-975d-9fbba318c495-kube-api-access-kggmz\") pod \"redhat-operators-j6z6h\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:28 crc kubenswrapper[4880]: I1201 04:42:28.140749 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:28 crc kubenswrapper[4880]: I1201 04:42:28.647634 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6z6h"]
Dec 01 04:42:29 crc kubenswrapper[4880]: I1201 04:42:29.122993 4880 generic.go:334] "Generic (PLEG): container finished" podID="02e4005d-90f5-4dce-975d-9fbba318c495" containerID="91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9" exitCode=0
Dec 01 04:42:29 crc kubenswrapper[4880]: I1201 04:42:29.123036 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerDied","Data":"91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9"}
Dec 01 04:42:29 crc kubenswrapper[4880]: I1201 04:42:29.123063 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerStarted","Data":"1d0501e09cbf0ecf905ec8a0cd40506467725fea5b9a88057f355b1653e565be"}
Dec 01 04:42:29 crc kubenswrapper[4880]: I1201 04:42:29.126712 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 01 04:42:31 crc kubenswrapper[4880]: I1201 04:42:31.143849 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerStarted","Data":"dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515"}
Dec 01 04:42:32 crc kubenswrapper[4880]: I1201 04:42:32.784514 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6"
Dec 01 04:42:32 crc kubenswrapper[4880]: E1201 04:42:32.786173 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:42:33 crc kubenswrapper[4880]: I1201 04:42:33.170241 4880 generic.go:334] "Generic (PLEG): container finished" podID="02e4005d-90f5-4dce-975d-9fbba318c495" containerID="dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515" exitCode=0
Dec 01 04:42:33 crc kubenswrapper[4880]: I1201 04:42:33.170293 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerDied","Data":"dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515"}
Dec 01 04:42:34 crc kubenswrapper[4880]: I1201 04:42:34.184558 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerStarted","Data":"7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604"}
Dec 01 04:42:34 crc kubenswrapper[4880]: I1201 04:42:34.219153 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j6z6h" podStartSLOduration=2.688608952 podStartE2EDuration="7.219127749s" podCreationTimestamp="2025-12-01 04:42:27 +0000 UTC" firstStartedPulling="2025-12-01 04:42:29.126394337 +0000 UTC m=+6378.637648699" lastFinishedPulling="2025-12-01 04:42:33.656913094 +0000 UTC m=+6383.168167496" observedRunningTime="2025-12-01 04:42:34.210786725 +0000 UTC m=+6383.722041107" watchObservedRunningTime="2025-12-01 04:42:34.219127749 +0000 UTC m=+6383.730382131"
Dec 01 04:42:38 crc kubenswrapper[4880]: I1201 04:42:38.141009 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:38 crc kubenswrapper[4880]: I1201 04:42:38.141422 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j6z6h"
Dec 01 04:42:39 crc kubenswrapper[4880]: I1201 04:42:39.204032 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j6z6h" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="registry-server" probeResult="failure" output=<
Dec 01 04:42:39 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s
Dec 01 04:42:39 crc kubenswrapper[4880]: >
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.584004 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-54z6q"]
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.586230 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.621234 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54z6q"]
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.673831 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-utilities\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.674112 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-catalog-content\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.674184 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4hlh\" (UniqueName: \"kubernetes.io/projected/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-kube-api-access-q4hlh\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.775405 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-catalog-content\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.775461 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4hlh\" (UniqueName: \"kubernetes.io/projected/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-kube-api-access-q4hlh\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.775600 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-utilities\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.776057 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-utilities\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.776087 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-catalog-content\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.802240 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4hlh\" (UniqueName: \"kubernetes.io/projected/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-kube-api-access-q4hlh\") pod \"redhat-marketplace-54z6q\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:41 crc kubenswrapper[4880]: I1201 04:42:41.905967 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:42 crc kubenswrapper[4880]: I1201 04:42:42.458277 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54z6q"]
Dec 01 04:42:43 crc kubenswrapper[4880]: I1201 04:42:43.310701 4880 generic.go:334] "Generic (PLEG): container finished" podID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerID="2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142" exitCode=0
Dec 01 04:42:43 crc kubenswrapper[4880]: I1201 04:42:43.310770 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerDied","Data":"2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142"}
Dec 01 04:42:43 crc kubenswrapper[4880]: I1201 04:42:43.311074 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerStarted","Data":"6f7a280081c727bec23d1c2a6591885bc6fb37e941ffc5f7b1e332040f03ce11"}
Dec 01 04:42:44 crc kubenswrapper[4880]: I1201 04:42:44.323710 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerStarted","Data":"fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f"}
Dec 01 04:42:45 crc kubenswrapper[4880]: I1201 04:42:45.335250 4880 generic.go:334] "Generic (PLEG): container finished" podID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerID="fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f" exitCode=0
Dec 01 04:42:45 crc kubenswrapper[4880]: I1201 04:42:45.335306 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerDied","Data":"fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f"}
Dec 01 04:42:47 crc kubenswrapper[4880]: I1201 04:42:47.355083 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerStarted","Data":"da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a"}
Dec 01 04:42:47 crc kubenswrapper[4880]: I1201 04:42:47.401953 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-54z6q" podStartSLOduration=3.113998484 podStartE2EDuration="6.401928084s" podCreationTimestamp="2025-12-01 04:42:41 +0000 UTC" firstStartedPulling="2025-12-01 04:42:43.313855578 +0000 UTC m=+6392.825109990" lastFinishedPulling="2025-12-01 04:42:46.601785218 +0000 UTC m=+6396.113039590" observedRunningTime="2025-12-01 04:42:47.378198472 +0000 UTC m=+6396.889452844" watchObservedRunningTime="2025-12-01 04:42:47.401928084 +0000 UTC m=+6396.913182476"
Dec 01 04:42:47 crc kubenswrapper[4880]: I1201 04:42:47.784383 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6"
Dec 01 04:42:47 crc kubenswrapper[4880]: E1201 04:42:47.784659 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 04:42:49 crc kubenswrapper[4880]: I1201 04:42:49.201343 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j6z6h" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="registry-server" probeResult="failure" output=<
Dec 01 04:42:49 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s
Dec 01 04:42:49 crc kubenswrapper[4880]: >
Dec 01 04:42:51 crc kubenswrapper[4880]: I1201 04:42:51.906795 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:51 crc kubenswrapper[4880]: I1201 04:42:51.907905 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:51 crc kubenswrapper[4880]: I1201 04:42:51.980045 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:52 crc kubenswrapper[4880]: I1201 04:42:52.475862 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-54z6q"
Dec 01 04:42:52 crc kubenswrapper[4880]: I1201 04:42:52.525464 4880 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openshift-marketplace/redhat-marketplace-54z6q"] Dec 01 04:42:54 crc kubenswrapper[4880]: I1201 04:42:54.427191 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-54z6q" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerName="registry-server" containerID="cri-o://da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a" gracePeriod=2 Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.025338 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54z6q" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.083560 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4hlh\" (UniqueName: \"kubernetes.io/projected/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-kube-api-access-q4hlh\") pod \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.084230 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-utilities\") pod \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.085379 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-utilities" (OuterVolumeSpecName: "utilities") pod "9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" (UID: "9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.095282 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-catalog-content\") pod \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\" (UID: \"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0\") " Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.096395 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.105033 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-kube-api-access-q4hlh" (OuterVolumeSpecName: "kube-api-access-q4hlh") pod "9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" (UID: "9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0"). InnerVolumeSpecName "kube-api-access-q4hlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.121146 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" (UID: "9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.197950 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.197978 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4hlh\" (UniqueName: \"kubernetes.io/projected/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0-kube-api-access-q4hlh\") on node \"crc\" DevicePath \"\"" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.443646 4880 generic.go:334] "Generic (PLEG): container finished" podID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerID="da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a" exitCode=0 Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.443717 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerDied","Data":"da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a"} Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.443798 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54z6q" event={"ID":"9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0","Type":"ContainerDied","Data":"6f7a280081c727bec23d1c2a6591885bc6fb37e941ffc5f7b1e332040f03ce11"} Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.443835 4880 scope.go:117] "RemoveContainer" containerID="da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.445777 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54z6q" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.481639 4880 scope.go:117] "RemoveContainer" containerID="fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.531150 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-54z6q"] Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.531274 4880 scope.go:117] "RemoveContainer" containerID="2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.560148 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-54z6q"] Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.586167 4880 scope.go:117] "RemoveContainer" containerID="da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a" Dec 01 04:42:55 crc kubenswrapper[4880]: E1201 04:42:55.587675 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a\": container with ID starting with da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a not found: ID does not exist" containerID="da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.587727 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a"} err="failed to get container status \"da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a\": rpc error: code = NotFound desc = could not find container \"da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a\": container with ID starting with da3d2640f76e50d1780b0615d777714f7af4e16b9e50865a26818fc396f7ed0a not found: 
ID does not exist" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.587756 4880 scope.go:117] "RemoveContainer" containerID="fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f" Dec 01 04:42:55 crc kubenswrapper[4880]: E1201 04:42:55.588233 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f\": container with ID starting with fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f not found: ID does not exist" containerID="fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.588257 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f"} err="failed to get container status \"fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f\": rpc error: code = NotFound desc = could not find container \"fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f\": container with ID starting with fed74d830d32f28ab79f16d014e630195411470b61c6c540dd668851395a304f not found: ID does not exist" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.588271 4880 scope.go:117] "RemoveContainer" containerID="2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142" Dec 01 04:42:55 crc kubenswrapper[4880]: E1201 04:42:55.588655 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142\": container with ID starting with 2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142 not found: ID does not exist" containerID="2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142" Dec 01 04:42:55 crc kubenswrapper[4880]: I1201 04:42:55.588676 4880 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142"} err="failed to get container status \"2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142\": rpc error: code = NotFound desc = could not find container \"2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142\": container with ID starting with 2c20d2018b4b619065abd88391b95232ae864ce0af32a5185962efd5d8487142 not found: ID does not exist" Dec 01 04:42:56 crc kubenswrapper[4880]: I1201 04:42:56.797825 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" path="/var/lib/kubelet/pods/9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0/volumes" Dec 01 04:42:58 crc kubenswrapper[4880]: I1201 04:42:58.189381 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j6z6h" Dec 01 04:42:58 crc kubenswrapper[4880]: I1201 04:42:58.255678 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j6z6h" Dec 01 04:42:58 crc kubenswrapper[4880]: I1201 04:42:58.616087 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6z6h"] Dec 01 04:42:59 crc kubenswrapper[4880]: I1201 04:42:59.482927 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j6z6h" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="registry-server" containerID="cri-o://7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604" gracePeriod=2 Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.018514 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6z6h" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.099137 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-catalog-content\") pod \"02e4005d-90f5-4dce-975d-9fbba318c495\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.099365 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kggmz\" (UniqueName: \"kubernetes.io/projected/02e4005d-90f5-4dce-975d-9fbba318c495-kube-api-access-kggmz\") pod \"02e4005d-90f5-4dce-975d-9fbba318c495\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.099463 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-utilities\") pod \"02e4005d-90f5-4dce-975d-9fbba318c495\" (UID: \"02e4005d-90f5-4dce-975d-9fbba318c495\") " Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.099945 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-utilities" (OuterVolumeSpecName: "utilities") pod "02e4005d-90f5-4dce-975d-9fbba318c495" (UID: "02e4005d-90f5-4dce-975d-9fbba318c495"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.100472 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.116101 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e4005d-90f5-4dce-975d-9fbba318c495-kube-api-access-kggmz" (OuterVolumeSpecName: "kube-api-access-kggmz") pod "02e4005d-90f5-4dce-975d-9fbba318c495" (UID: "02e4005d-90f5-4dce-975d-9fbba318c495"). InnerVolumeSpecName "kube-api-access-kggmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.203220 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kggmz\" (UniqueName: \"kubernetes.io/projected/02e4005d-90f5-4dce-975d-9fbba318c495-kube-api-access-kggmz\") on node \"crc\" DevicePath \"\"" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.225847 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02e4005d-90f5-4dce-975d-9fbba318c495" (UID: "02e4005d-90f5-4dce-975d-9fbba318c495"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.304591 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e4005d-90f5-4dce-975d-9fbba318c495-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.493232 4880 generic.go:334] "Generic (PLEG): container finished" podID="02e4005d-90f5-4dce-975d-9fbba318c495" containerID="7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604" exitCode=0 Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.493267 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerDied","Data":"7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604"} Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.493291 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6z6h" event={"ID":"02e4005d-90f5-4dce-975d-9fbba318c495","Type":"ContainerDied","Data":"1d0501e09cbf0ecf905ec8a0cd40506467725fea5b9a88057f355b1653e565be"} Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.493306 4880 scope.go:117] "RemoveContainer" containerID="7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.493409 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6z6h" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.537423 4880 scope.go:117] "RemoveContainer" containerID="dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.538273 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6z6h"] Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.547341 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j6z6h"] Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.592158 4880 scope.go:117] "RemoveContainer" containerID="91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.614655 4880 scope.go:117] "RemoveContainer" containerID="7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604" Dec 01 04:43:00 crc kubenswrapper[4880]: E1201 04:43:00.615148 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604\": container with ID starting with 7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604 not found: ID does not exist" containerID="7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.615176 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604"} err="failed to get container status \"7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604\": rpc error: code = NotFound desc = could not find container \"7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604\": container with ID starting with 7121bd26c4780c4b7415880e04d30cf738554c1f43053a0627b20dc6e753d604 not found: ID does 
not exist" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.615196 4880 scope.go:117] "RemoveContainer" containerID="dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515" Dec 01 04:43:00 crc kubenswrapper[4880]: E1201 04:43:00.615569 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515\": container with ID starting with dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515 not found: ID does not exist" containerID="dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.615589 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515"} err="failed to get container status \"dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515\": rpc error: code = NotFound desc = could not find container \"dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515\": container with ID starting with dfd1ebd0c90fd09b966bef68d3873ee955e349440c10d974a11996f34c79b515 not found: ID does not exist" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.615600 4880 scope.go:117] "RemoveContainer" containerID="91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9" Dec 01 04:43:00 crc kubenswrapper[4880]: E1201 04:43:00.615907 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9\": container with ID starting with 91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9 not found: ID does not exist" containerID="91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.615961 4880 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9"} err="failed to get container status \"91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9\": rpc error: code = NotFound desc = could not find container \"91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9\": container with ID starting with 91af8c1924653ebf7c53635f88c49003c5fe7a3115d4efba133600c52db6b7f9 not found: ID does not exist" Dec 01 04:43:00 crc kubenswrapper[4880]: I1201 04:43:00.799109 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" path="/var/lib/kubelet/pods/02e4005d-90f5-4dce-975d-9fbba318c495/volumes" Dec 01 04:43:02 crc kubenswrapper[4880]: I1201 04:43:02.784478 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:43:02 crc kubenswrapper[4880]: E1201 04:43:02.785027 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:43:16 crc kubenswrapper[4880]: I1201 04:43:16.783989 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:43:16 crc kubenswrapper[4880]: E1201 04:43:16.785186 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:43:29 crc kubenswrapper[4880]: I1201 04:43:29.784480 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:43:29 crc kubenswrapper[4880]: E1201 04:43:29.785241 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:43:44 crc kubenswrapper[4880]: I1201 04:43:44.784855 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:43:44 crc kubenswrapper[4880]: E1201 04:43:44.786703 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.877651 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2bq9"] Dec 01 04:43:47 crc kubenswrapper[4880]: E1201 04:43:47.878394 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerName="extract-utilities" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878408 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" 
containerName="extract-utilities" Dec 01 04:43:47 crc kubenswrapper[4880]: E1201 04:43:47.878429 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="extract-utilities" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878437 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="extract-utilities" Dec 01 04:43:47 crc kubenswrapper[4880]: E1201 04:43:47.878452 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="registry-server" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878460 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="registry-server" Dec 01 04:43:47 crc kubenswrapper[4880]: E1201 04:43:47.878476 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="extract-content" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878484 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="extract-content" Dec 01 04:43:47 crc kubenswrapper[4880]: E1201 04:43:47.878499 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerName="registry-server" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878507 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerName="registry-server" Dec 01 04:43:47 crc kubenswrapper[4880]: E1201 04:43:47.878523 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerName="extract-content" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878531 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" 
containerName="extract-content" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878757 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ebbd1c4-3efe-4187-93f5-9ebe7daa75c0" containerName="registry-server" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.878782 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e4005d-90f5-4dce-975d-9fbba318c495" containerName="registry-server" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.880377 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:47 crc kubenswrapper[4880]: I1201 04:43:47.898076 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2bq9"] Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.030521 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-utilities\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.030693 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ppk\" (UniqueName: \"kubernetes.io/projected/82726f79-3b00-48e3-bc16-ccacb03d0f28-kube-api-access-r2ppk\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.030720 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-catalog-content\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " 
pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.131968 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-utilities\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.132102 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ppk\" (UniqueName: \"kubernetes.io/projected/82726f79-3b00-48e3-bc16-ccacb03d0f28-kube-api-access-r2ppk\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.132120 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-catalog-content\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.132465 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-utilities\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.132514 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-catalog-content\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " 
pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.158370 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ppk\" (UniqueName: \"kubernetes.io/projected/82726f79-3b00-48e3-bc16-ccacb03d0f28-kube-api-access-r2ppk\") pod \"community-operators-v2bq9\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.248443 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:48 crc kubenswrapper[4880]: I1201 04:43:48.750805 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2bq9"] Dec 01 04:43:48 crc kubenswrapper[4880]: W1201 04:43:48.753342 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82726f79_3b00_48e3_bc16_ccacb03d0f28.slice/crio-60cd9e5651cc85523cf4dd2946b69bd68bef0245f0de97a81f212d42ee637f77 WatchSource:0}: Error finding container 60cd9e5651cc85523cf4dd2946b69bd68bef0245f0de97a81f212d42ee637f77: Status 404 returned error can't find the container with id 60cd9e5651cc85523cf4dd2946b69bd68bef0245f0de97a81f212d42ee637f77 Dec 01 04:43:49 crc kubenswrapper[4880]: I1201 04:43:49.020129 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerStarted","Data":"4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9"} Dec 01 04:43:49 crc kubenswrapper[4880]: I1201 04:43:49.020183 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" 
event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerStarted","Data":"60cd9e5651cc85523cf4dd2946b69bd68bef0245f0de97a81f212d42ee637f77"} Dec 01 04:43:50 crc kubenswrapper[4880]: I1201 04:43:50.029709 4880 generic.go:334] "Generic (PLEG): container finished" podID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerID="4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9" exitCode=0 Dec 01 04:43:50 crc kubenswrapper[4880]: I1201 04:43:50.029834 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerDied","Data":"4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9"} Dec 01 04:43:51 crc kubenswrapper[4880]: I1201 04:43:51.043589 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerStarted","Data":"384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0"} Dec 01 04:43:52 crc kubenswrapper[4880]: I1201 04:43:52.055746 4880 generic.go:334] "Generic (PLEG): container finished" podID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerID="384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0" exitCode=0 Dec 01 04:43:52 crc kubenswrapper[4880]: I1201 04:43:52.055800 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerDied","Data":"384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0"} Dec 01 04:43:53 crc kubenswrapper[4880]: I1201 04:43:53.068618 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerStarted","Data":"244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0"} Dec 01 04:43:53 crc kubenswrapper[4880]: 
I1201 04:43:53.088447 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2bq9" podStartSLOduration=3.618891747 podStartE2EDuration="6.088432095s" podCreationTimestamp="2025-12-01 04:43:47 +0000 UTC" firstStartedPulling="2025-12-01 04:43:50.035250666 +0000 UTC m=+6459.546505038" lastFinishedPulling="2025-12-01 04:43:52.504791014 +0000 UTC m=+6462.016045386" observedRunningTime="2025-12-01 04:43:53.086895157 +0000 UTC m=+6462.598149529" watchObservedRunningTime="2025-12-01 04:43:53.088432095 +0000 UTC m=+6462.599686467" Dec 01 04:43:58 crc kubenswrapper[4880]: I1201 04:43:58.249172 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:58 crc kubenswrapper[4880]: I1201 04:43:58.250035 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:58 crc kubenswrapper[4880]: I1201 04:43:58.321619 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:59 crc kubenswrapper[4880]: I1201 04:43:59.218133 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:43:59 crc kubenswrapper[4880]: I1201 04:43:59.784053 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:43:59 crc kubenswrapper[4880]: E1201 04:43:59.784315 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:43:59 crc kubenswrapper[4880]: I1201 04:43:59.856289 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2bq9"] Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.152100 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v2bq9" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="registry-server" containerID="cri-o://244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0" gracePeriod=2 Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.698579 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.770756 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ppk\" (UniqueName: \"kubernetes.io/projected/82726f79-3b00-48e3-bc16-ccacb03d0f28-kube-api-access-r2ppk\") pod \"82726f79-3b00-48e3-bc16-ccacb03d0f28\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.772080 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-catalog-content\") pod \"82726f79-3b00-48e3-bc16-ccacb03d0f28\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.772275 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-utilities\") pod \"82726f79-3b00-48e3-bc16-ccacb03d0f28\" (UID: \"82726f79-3b00-48e3-bc16-ccacb03d0f28\") " Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.774033 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-utilities" (OuterVolumeSpecName: "utilities") pod "82726f79-3b00-48e3-bc16-ccacb03d0f28" (UID: "82726f79-3b00-48e3-bc16-ccacb03d0f28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.797685 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82726f79-3b00-48e3-bc16-ccacb03d0f28-kube-api-access-r2ppk" (OuterVolumeSpecName: "kube-api-access-r2ppk") pod "82726f79-3b00-48e3-bc16-ccacb03d0f28" (UID: "82726f79-3b00-48e3-bc16-ccacb03d0f28"). InnerVolumeSpecName "kube-api-access-r2ppk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.875125 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.875171 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ppk\" (UniqueName: \"kubernetes.io/projected/82726f79-3b00-48e3-bc16-ccacb03d0f28-kube-api-access-r2ppk\") on node \"crc\" DevicePath \"\"" Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.942391 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82726f79-3b00-48e3-bc16-ccacb03d0f28" (UID: "82726f79-3b00-48e3-bc16-ccacb03d0f28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:44:01 crc kubenswrapper[4880]: I1201 04:44:01.977037 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82726f79-3b00-48e3-bc16-ccacb03d0f28-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.166533 4880 generic.go:334] "Generic (PLEG): container finished" podID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerID="244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0" exitCode=0 Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.166587 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerDied","Data":"244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0"} Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.166619 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2bq9" event={"ID":"82726f79-3b00-48e3-bc16-ccacb03d0f28","Type":"ContainerDied","Data":"60cd9e5651cc85523cf4dd2946b69bd68bef0245f0de97a81f212d42ee637f77"} Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.166641 4880 scope.go:117] "RemoveContainer" containerID="244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.167035 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2bq9" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.199667 4880 scope.go:117] "RemoveContainer" containerID="384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.221306 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2bq9"] Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.232227 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v2bq9"] Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.238569 4880 scope.go:117] "RemoveContainer" containerID="4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.282952 4880 scope.go:117] "RemoveContainer" containerID="244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0" Dec 01 04:44:02 crc kubenswrapper[4880]: E1201 04:44:02.285210 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0\": container with ID starting with 244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0 not found: ID does not exist" containerID="244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.285246 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0"} err="failed to get container status \"244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0\": rpc error: code = NotFound desc = could not find container \"244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0\": container with ID starting with 244e1ad9cab7b72f7b44860dda6626d3613fc1871efe6dedfdced263237bd2b0 not 
found: ID does not exist" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.285270 4880 scope.go:117] "RemoveContainer" containerID="384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0" Dec 01 04:44:02 crc kubenswrapper[4880]: E1201 04:44:02.285592 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0\": container with ID starting with 384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0 not found: ID does not exist" containerID="384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.285611 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0"} err="failed to get container status \"384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0\": rpc error: code = NotFound desc = could not find container \"384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0\": container with ID starting with 384fc9fdf6182c38364434d61b212ff53897d2fe76d2d15e14407486adf63ff0 not found: ID does not exist" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.285622 4880 scope.go:117] "RemoveContainer" containerID="4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9" Dec 01 04:44:02 crc kubenswrapper[4880]: E1201 04:44:02.286002 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9\": container with ID starting with 4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9 not found: ID does not exist" containerID="4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.286026 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9"} err="failed to get container status \"4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9\": rpc error: code = NotFound desc = could not find container \"4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9\": container with ID starting with 4ca1b1b2bd2017eea89d36c942db4ed611832bf7dfe02ac600ec242a8d04fcc9 not found: ID does not exist" Dec 01 04:44:02 crc kubenswrapper[4880]: I1201 04:44:02.816525 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" path="/var/lib/kubelet/pods/82726f79-3b00-48e3-bc16-ccacb03d0f28/volumes" Dec 01 04:44:11 crc kubenswrapper[4880]: I1201 04:44:11.784964 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:44:11 crc kubenswrapper[4880]: E1201 04:44:11.788358 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:44:22 crc kubenswrapper[4880]: I1201 04:44:22.785236 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:44:22 crc kubenswrapper[4880]: E1201 04:44:22.786011 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:44:36 crc kubenswrapper[4880]: I1201 04:44:36.783936 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:44:36 crc kubenswrapper[4880]: E1201 04:44:36.784535 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:44:50 crc kubenswrapper[4880]: I1201 04:44:50.795676 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:44:50 crc kubenswrapper[4880]: E1201 04:44:50.798032 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.183890 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv"] Dec 01 04:45:00 crc kubenswrapper[4880]: E1201 04:45:00.184858 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="registry-server" Dec 01 04:45:00 crc 
kubenswrapper[4880]: I1201 04:45:00.184895 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="registry-server" Dec 01 04:45:00 crc kubenswrapper[4880]: E1201 04:45:00.184923 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="extract-content" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.184932 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="extract-content" Dec 01 04:45:00 crc kubenswrapper[4880]: E1201 04:45:00.184958 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="extract-utilities" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.184966 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="extract-utilities" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.185201 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="82726f79-3b00-48e3-bc16-ccacb03d0f28" containerName="registry-server" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.186274 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.198060 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.208522 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv"] Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.248488 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.289353 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69df4749-538a-4da6-ab2f-99dd1d6b8d55-config-volume\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.289412 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69df4749-538a-4da6-ab2f-99dd1d6b8d55-secret-volume\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.289459 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4whr\" (UniqueName: \"kubernetes.io/projected/69df4749-538a-4da6-ab2f-99dd1d6b8d55-kube-api-access-j4whr\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.391854 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69df4749-538a-4da6-ab2f-99dd1d6b8d55-config-volume\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.391934 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69df4749-538a-4da6-ab2f-99dd1d6b8d55-secret-volume\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.391988 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4whr\" (UniqueName: \"kubernetes.io/projected/69df4749-538a-4da6-ab2f-99dd1d6b8d55-kube-api-access-j4whr\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.392797 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69df4749-538a-4da6-ab2f-99dd1d6b8d55-config-volume\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.399049 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/69df4749-538a-4da6-ab2f-99dd1d6b8d55-secret-volume\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.409615 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4whr\" (UniqueName: \"kubernetes.io/projected/69df4749-538a-4da6-ab2f-99dd1d6b8d55-kube-api-access-j4whr\") pod \"collect-profiles-29409405-587zv\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.512582 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:00 crc kubenswrapper[4880]: I1201 04:45:00.821852 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv"] Dec 01 04:45:01 crc kubenswrapper[4880]: I1201 04:45:01.744645 4880 generic.go:334] "Generic (PLEG): container finished" podID="69df4749-538a-4da6-ab2f-99dd1d6b8d55" containerID="b2c78f2eae30b91f646cb849077d2abc7b92aef20adcc5c71e6fb19cad7a7214" exitCode=0 Dec 01 04:45:01 crc kubenswrapper[4880]: I1201 04:45:01.744699 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" event={"ID":"69df4749-538a-4da6-ab2f-99dd1d6b8d55","Type":"ContainerDied","Data":"b2c78f2eae30b91f646cb849077d2abc7b92aef20adcc5c71e6fb19cad7a7214"} Dec 01 04:45:01 crc kubenswrapper[4880]: I1201 04:45:01.744741 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" 
event={"ID":"69df4749-538a-4da6-ab2f-99dd1d6b8d55","Type":"ContainerStarted","Data":"423587ebaa3064c4229c36fd856f0ab2b3fc1da6c1467b55bfabec0b20502ea7"} Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.279778 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.355847 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69df4749-538a-4da6-ab2f-99dd1d6b8d55-config-volume\") pod \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.356995 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69df4749-538a-4da6-ab2f-99dd1d6b8d55-secret-volume\") pod \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.357077 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4whr\" (UniqueName: \"kubernetes.io/projected/69df4749-538a-4da6-ab2f-99dd1d6b8d55-kube-api-access-j4whr\") pod \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\" (UID: \"69df4749-538a-4da6-ab2f-99dd1d6b8d55\") " Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.360412 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69df4749-538a-4da6-ab2f-99dd1d6b8d55-config-volume" (OuterVolumeSpecName: "config-volume") pod "69df4749-538a-4da6-ab2f-99dd1d6b8d55" (UID: "69df4749-538a-4da6-ab2f-99dd1d6b8d55"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.379406 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69df4749-538a-4da6-ab2f-99dd1d6b8d55-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69df4749-538a-4da6-ab2f-99dd1d6b8d55" (UID: "69df4749-538a-4da6-ab2f-99dd1d6b8d55"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.380919 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69df4749-538a-4da6-ab2f-99dd1d6b8d55-kube-api-access-j4whr" (OuterVolumeSpecName: "kube-api-access-j4whr") pod "69df4749-538a-4da6-ab2f-99dd1d6b8d55" (UID: "69df4749-538a-4da6-ab2f-99dd1d6b8d55"). InnerVolumeSpecName "kube-api-access-j4whr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.459730 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69df4749-538a-4da6-ab2f-99dd1d6b8d55-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.459758 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69df4749-538a-4da6-ab2f-99dd1d6b8d55-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.459768 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4whr\" (UniqueName: \"kubernetes.io/projected/69df4749-538a-4da6-ab2f-99dd1d6b8d55-kube-api-access-j4whr\") on node \"crc\" DevicePath \"\"" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.769297 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" 
event={"ID":"69df4749-538a-4da6-ab2f-99dd1d6b8d55","Type":"ContainerDied","Data":"423587ebaa3064c4229c36fd856f0ab2b3fc1da6c1467b55bfabec0b20502ea7"} Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.769474 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv" Dec 01 04:45:03 crc kubenswrapper[4880]: I1201 04:45:03.769343 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="423587ebaa3064c4229c36fd856f0ab2b3fc1da6c1467b55bfabec0b20502ea7" Dec 01 04:45:04 crc kubenswrapper[4880]: I1201 04:45:04.409181 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6"] Dec 01 04:45:04 crc kubenswrapper[4880]: I1201 04:45:04.419503 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409360-hhwm6"] Dec 01 04:45:04 crc kubenswrapper[4880]: I1201 04:45:04.783562 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:45:04 crc kubenswrapper[4880]: E1201 04:45:04.784103 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:45:04 crc kubenswrapper[4880]: I1201 04:45:04.798262 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53e41ca-9a2b-446a-8d21-87b68bcbe84b" path="/var/lib/kubelet/pods/f53e41ca-9a2b-446a-8d21-87b68bcbe84b/volumes" Dec 01 04:45:15 crc kubenswrapper[4880]: I1201 04:45:15.784762 4880 scope.go:117] "RemoveContainer" 
containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:45:15 crc kubenswrapper[4880]: E1201 04:45:15.786118 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:45:29 crc kubenswrapper[4880]: I1201 04:45:29.784619 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:45:29 crc kubenswrapper[4880]: E1201 04:45:29.785462 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:45:41 crc kubenswrapper[4880]: I1201 04:45:41.785632 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:45:41 crc kubenswrapper[4880]: E1201 04:45:41.786661 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:45:44 crc kubenswrapper[4880]: I1201 04:45:44.792874 4880 scope.go:117] 
"RemoveContainer" containerID="de0676f11686e2ecbb741cfbbe7e19e9f9276193a47446533c3149b7d57c5ca9" Dec 01 04:45:52 crc kubenswrapper[4880]: I1201 04:45:52.783581 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:45:52 crc kubenswrapper[4880]: E1201 04:45:52.784248 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:46:06 crc kubenswrapper[4880]: I1201 04:46:06.785269 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:46:06 crc kubenswrapper[4880]: E1201 04:46:06.786805 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:46:17 crc kubenswrapper[4880]: I1201 04:46:17.784346 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:46:17 crc kubenswrapper[4880]: E1201 04:46:17.785390 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:46:30 crc kubenswrapper[4880]: I1201 04:46:30.797051 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:46:30 crc kubenswrapper[4880]: E1201 04:46:30.797851 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.835512 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pl4g9"] Dec 01 04:46:36 crc kubenswrapper[4880]: E1201 04:46:36.839358 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69df4749-538a-4da6-ab2f-99dd1d6b8d55" containerName="collect-profiles" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.839394 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="69df4749-538a-4da6-ab2f-99dd1d6b8d55" containerName="collect-profiles" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.839719 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="69df4749-538a-4da6-ab2f-99dd1d6b8d55" containerName="collect-profiles" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.841516 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.853326 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl4g9"] Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.927550 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjbxt\" (UniqueName: \"kubernetes.io/projected/d88e40c5-6e17-414d-be7e-be4a30d38216-kube-api-access-cjbxt\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.928223 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-utilities\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:36 crc kubenswrapper[4880]: I1201 04:46:36.928416 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-catalog-content\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.030031 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-catalog-content\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.030167 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cjbxt\" (UniqueName: \"kubernetes.io/projected/d88e40c5-6e17-414d-be7e-be4a30d38216-kube-api-access-cjbxt\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.030195 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-utilities\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.030860 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-utilities\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.030861 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-catalog-content\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.056637 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjbxt\" (UniqueName: \"kubernetes.io/projected/d88e40c5-6e17-414d-be7e-be4a30d38216-kube-api-access-cjbxt\") pod \"certified-operators-pl4g9\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.202969 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:37 crc kubenswrapper[4880]: I1201 04:46:37.886196 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl4g9"] Dec 01 04:46:38 crc kubenswrapper[4880]: I1201 04:46:38.843809 4880 generic.go:334] "Generic (PLEG): container finished" podID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerID="1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7" exitCode=0 Dec 01 04:46:38 crc kubenswrapper[4880]: I1201 04:46:38.843885 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4g9" event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerDied","Data":"1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7"} Dec 01 04:46:38 crc kubenswrapper[4880]: I1201 04:46:38.844313 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4g9" event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerStarted","Data":"3ab6bc5026a5a676a2c224b90e6de51ababb8c3b6f25bd31fafe7dd911396a11"} Dec 01 04:46:39 crc kubenswrapper[4880]: I1201 04:46:39.857157 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4g9" event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerStarted","Data":"07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93"} Dec 01 04:46:41 crc kubenswrapper[4880]: I1201 04:46:41.875556 4880 generic.go:334] "Generic (PLEG): container finished" podID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerID="07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93" exitCode=0 Dec 01 04:46:41 crc kubenswrapper[4880]: I1201 04:46:41.875843 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4g9" 
event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerDied","Data":"07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93"} Dec 01 04:46:42 crc kubenswrapper[4880]: I1201 04:46:42.891536 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4g9" event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerStarted","Data":"51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6"} Dec 01 04:46:42 crc kubenswrapper[4880]: I1201 04:46:42.917359 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pl4g9" podStartSLOduration=3.449156041 podStartE2EDuration="6.917338978s" podCreationTimestamp="2025-12-01 04:46:36 +0000 UTC" firstStartedPulling="2025-12-01 04:46:38.846608117 +0000 UTC m=+6628.357862489" lastFinishedPulling="2025-12-01 04:46:42.314791044 +0000 UTC m=+6631.826045426" observedRunningTime="2025-12-01 04:46:42.916907277 +0000 UTC m=+6632.428161679" watchObservedRunningTime="2025-12-01 04:46:42.917338978 +0000 UTC m=+6632.428593370" Dec 01 04:46:43 crc kubenswrapper[4880]: I1201 04:46:43.784419 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:46:43 crc kubenswrapper[4880]: E1201 04:46:43.784788 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:46:47 crc kubenswrapper[4880]: I1201 04:46:47.203310 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:47 crc 
kubenswrapper[4880]: I1201 04:46:47.203828 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:47 crc kubenswrapper[4880]: I1201 04:46:47.251592 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:48 crc kubenswrapper[4880]: I1201 04:46:48.016089 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:48 crc kubenswrapper[4880]: I1201 04:46:48.082805 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl4g9"] Dec 01 04:46:49 crc kubenswrapper[4880]: I1201 04:46:49.969457 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pl4g9" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="registry-server" containerID="cri-o://51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6" gracePeriod=2 Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.438639 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.603253 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjbxt\" (UniqueName: \"kubernetes.io/projected/d88e40c5-6e17-414d-be7e-be4a30d38216-kube-api-access-cjbxt\") pod \"d88e40c5-6e17-414d-be7e-be4a30d38216\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.603309 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-catalog-content\") pod \"d88e40c5-6e17-414d-be7e-be4a30d38216\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.603511 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-utilities\") pod \"d88e40c5-6e17-414d-be7e-be4a30d38216\" (UID: \"d88e40c5-6e17-414d-be7e-be4a30d38216\") " Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.604212 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-utilities" (OuterVolumeSpecName: "utilities") pod "d88e40c5-6e17-414d-be7e-be4a30d38216" (UID: "d88e40c5-6e17-414d-be7e-be4a30d38216"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.614090 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88e40c5-6e17-414d-be7e-be4a30d38216-kube-api-access-cjbxt" (OuterVolumeSpecName: "kube-api-access-cjbxt") pod "d88e40c5-6e17-414d-be7e-be4a30d38216" (UID: "d88e40c5-6e17-414d-be7e-be4a30d38216"). InnerVolumeSpecName "kube-api-access-cjbxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.655502 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d88e40c5-6e17-414d-be7e-be4a30d38216" (UID: "d88e40c5-6e17-414d-be7e-be4a30d38216"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.705624 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.705662 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjbxt\" (UniqueName: \"kubernetes.io/projected/d88e40c5-6e17-414d-be7e-be4a30d38216-kube-api-access-cjbxt\") on node \"crc\" DevicePath \"\"" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.705677 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e40c5-6e17-414d-be7e-be4a30d38216-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.979856 4880 generic.go:334] "Generic (PLEG): container finished" podID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerID="51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6" exitCode=0 Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.979949 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl4g9" event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerDied","Data":"51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6"} Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.979978 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pl4g9" event={"ID":"d88e40c5-6e17-414d-be7e-be4a30d38216","Type":"ContainerDied","Data":"3ab6bc5026a5a676a2c224b90e6de51ababb8c3b6f25bd31fafe7dd911396a11"} Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.979994 4880 scope.go:117] "RemoveContainer" containerID="51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.980127 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl4g9" Dec 01 04:46:50 crc kubenswrapper[4880]: I1201 04:46:50.999620 4880 scope.go:117] "RemoveContainer" containerID="07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.020053 4880 scope.go:117] "RemoveContainer" containerID="1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.021397 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl4g9"] Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.034466 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pl4g9"] Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.066250 4880 scope.go:117] "RemoveContainer" containerID="51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6" Dec 01 04:46:51 crc kubenswrapper[4880]: E1201 04:46:51.066595 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6\": container with ID starting with 51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6 not found: ID does not exist" containerID="51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 
04:46:51.066633 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6"} err="failed to get container status \"51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6\": rpc error: code = NotFound desc = could not find container \"51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6\": container with ID starting with 51826cfb35bd72f3eadffd41b364141dd27cd29c6805e0ecce91c074a1b5f6b6 not found: ID does not exist" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.066660 4880 scope.go:117] "RemoveContainer" containerID="07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93" Dec 01 04:46:51 crc kubenswrapper[4880]: E1201 04:46:51.066913 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93\": container with ID starting with 07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93 not found: ID does not exist" containerID="07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.066941 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93"} err="failed to get container status \"07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93\": rpc error: code = NotFound desc = could not find container \"07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93\": container with ID starting with 07399ca5e771fb23d2b1c77319d67fbda94f1f5362e5bcb89e35371502af2a93 not found: ID does not exist" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.066957 4880 scope.go:117] "RemoveContainer" containerID="1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7" Dec 01 04:46:51 crc 
kubenswrapper[4880]: E1201 04:46:51.067312 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7\": container with ID starting with 1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7 not found: ID does not exist" containerID="1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7" Dec 01 04:46:51 crc kubenswrapper[4880]: I1201 04:46:51.067363 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7"} err="failed to get container status \"1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7\": rpc error: code = NotFound desc = could not find container \"1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7\": container with ID starting with 1e5c325eb9f770ec15eb2bc78760767dd41628e59fc3408b960b9b9e452021c7 not found: ID does not exist" Dec 01 04:46:52 crc kubenswrapper[4880]: I1201 04:46:52.800958 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" path="/var/lib/kubelet/pods/d88e40c5-6e17-414d-be7e-be4a30d38216/volumes" Dec 01 04:46:58 crc kubenswrapper[4880]: I1201 04:46:58.785480 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:46:58 crc kubenswrapper[4880]: E1201 04:46:58.786426 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:47:12 crc 
kubenswrapper[4880]: I1201 04:47:12.788164 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:47:12 crc kubenswrapper[4880]: E1201 04:47:12.789612 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:47:17 crc kubenswrapper[4880]: E1201 04:47:17.814501 4880 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.39:55216->38.102.83.39:42095: write tcp 38.102.83.39:55216->38.102.83.39:42095: write: broken pipe Dec 01 04:47:26 crc kubenswrapper[4880]: I1201 04:47:26.785394 4880 scope.go:117] "RemoveContainer" containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:47:27 crc kubenswrapper[4880]: I1201 04:47:27.364959 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"4db240966bb5aca8ada6478180f59a4290577a600d4defaea9c20d7bd5b6dd9f"} Dec 01 04:48:25 crc kubenswrapper[4880]: E1201 04:48:25.577188 4880 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.39:57634->38.102.83.39:42095: read tcp 38.102.83.39:57634->38.102.83.39:42095: read: connection reset by peer Dec 01 04:48:45 crc kubenswrapper[4880]: E1201 04:48:45.240395 4880 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.39:39758->38.102.83.39:42095: read tcp 38.102.83.39:39758->38.102.83.39:42095: read: connection reset by peer Dec 01 04:49:47 crc kubenswrapper[4880]: 
I1201 04:49:47.406081 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:49:47 crc kubenswrapper[4880]: I1201 04:49:47.408436 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:50:17 crc kubenswrapper[4880]: I1201 04:50:17.368866 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:50:17 crc kubenswrapper[4880]: I1201 04:50:17.369515 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.369262 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.372087 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.372321 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.373705 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4db240966bb5aca8ada6478180f59a4290577a600d4defaea9c20d7bd5b6dd9f"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.373997 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://4db240966bb5aca8ada6478180f59a4290577a600d4defaea9c20d7bd5b6dd9f" gracePeriod=600 Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.520906 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="4db240966bb5aca8ada6478180f59a4290577a600d4defaea9c20d7bd5b6dd9f" exitCode=0 Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.521012 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"4db240966bb5aca8ada6478180f59a4290577a600d4defaea9c20d7bd5b6dd9f"} Dec 01 04:50:47 crc kubenswrapper[4880]: I1201 04:50:47.521531 4880 scope.go:117] "RemoveContainer" 
containerID="fb9553f0a90720517430479cf2a705ae4026e5a56d92d10c38b74d7f80c7eae6" Dec 01 04:50:48 crc kubenswrapper[4880]: I1201 04:50:48.542838 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9"} Dec 01 04:52:47 crc kubenswrapper[4880]: I1201 04:52:47.369211 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:52:47 crc kubenswrapper[4880]: I1201 04:52:47.369818 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:53:17 crc kubenswrapper[4880]: I1201 04:53:17.369344 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:53:17 crc kubenswrapper[4880]: I1201 04:53:17.369971 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.816525 4880 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9zmh"] Dec 01 04:53:23 crc kubenswrapper[4880]: E1201 04:53:23.817642 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="extract-utilities" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.817664 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="extract-utilities" Dec 01 04:53:23 crc kubenswrapper[4880]: E1201 04:53:23.817688 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="registry-server" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.817698 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="registry-server" Dec 01 04:53:23 crc kubenswrapper[4880]: E1201 04:53:23.817711 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="extract-content" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.817721 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="extract-content" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.817976 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88e40c5-6e17-414d-be7e-be4a30d38216" containerName="registry-server" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.820321 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.859254 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9zmh"] Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.977588 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-utilities\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.977662 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-catalog-content\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:23 crc kubenswrapper[4880]: I1201 04:53:23.977752 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrw9m\" (UniqueName: \"kubernetes.io/projected/c23e0765-f7c7-4e57-baa6-44c8782f51ba-kube-api-access-jrw9m\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.080046 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-utilities\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.080145 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-catalog-content\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.080274 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrw9m\" (UniqueName: \"kubernetes.io/projected/c23e0765-f7c7-4e57-baa6-44c8782f51ba-kube-api-access-jrw9m\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.081192 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-utilities\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.081381 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-catalog-content\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.112786 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrw9m\" (UniqueName: \"kubernetes.io/projected/c23e0765-f7c7-4e57-baa6-44c8782f51ba-kube-api-access-jrw9m\") pod \"redhat-operators-j9zmh\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.165302 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:24 crc kubenswrapper[4880]: I1201 04:53:24.627205 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9zmh"] Dec 01 04:53:25 crc kubenswrapper[4880]: I1201 04:53:25.523665 4880 generic.go:334] "Generic (PLEG): container finished" podID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerID="ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60" exitCode=0 Dec 01 04:53:25 crc kubenswrapper[4880]: I1201 04:53:25.523744 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerDied","Data":"ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60"} Dec 01 04:53:25 crc kubenswrapper[4880]: I1201 04:53:25.523954 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerStarted","Data":"feefb2852bb9aa2645f21c811d5954a772386062cac18d49acc09cfeb3c7fd27"} Dec 01 04:53:25 crc kubenswrapper[4880]: I1201 04:53:25.527939 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 04:53:27 crc kubenswrapper[4880]: I1201 04:53:27.558799 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerStarted","Data":"36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3"} Dec 01 04:53:29 crc kubenswrapper[4880]: I1201 04:53:29.584101 4880 generic.go:334] "Generic (PLEG): container finished" podID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerID="36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3" exitCode=0 Dec 01 04:53:29 crc kubenswrapper[4880]: I1201 04:53:29.584652 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerDied","Data":"36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3"} Dec 01 04:53:30 crc kubenswrapper[4880]: I1201 04:53:30.595186 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerStarted","Data":"9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877"} Dec 01 04:53:30 crc kubenswrapper[4880]: I1201 04:53:30.611377 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9zmh" podStartSLOduration=3.082134716 podStartE2EDuration="7.61134371s" podCreationTimestamp="2025-12-01 04:53:23 +0000 UTC" firstStartedPulling="2025-12-01 04:53:25.526199018 +0000 UTC m=+7035.037453390" lastFinishedPulling="2025-12-01 04:53:30.055408012 +0000 UTC m=+7039.566662384" observedRunningTime="2025-12-01 04:53:30.609939906 +0000 UTC m=+7040.121194278" watchObservedRunningTime="2025-12-01 04:53:30.61134371 +0000 UTC m=+7040.122598082" Dec 01 04:53:34 crc kubenswrapper[4880]: I1201 04:53:34.166275 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:34 crc kubenswrapper[4880]: I1201 04:53:34.166997 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:35 crc kubenswrapper[4880]: I1201 04:53:35.223766 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9zmh" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="registry-server" probeResult="failure" output=< Dec 01 04:53:35 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:53:35 crc kubenswrapper[4880]: > Dec 01 04:53:45 crc kubenswrapper[4880]: I1201 
04:53:45.223813 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j9zmh" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="registry-server" probeResult="failure" output=< Dec 01 04:53:45 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 04:53:45 crc kubenswrapper[4880]: > Dec 01 04:53:47 crc kubenswrapper[4880]: I1201 04:53:47.369456 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 04:53:47 crc kubenswrapper[4880]: I1201 04:53:47.369949 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 04:53:47 crc kubenswrapper[4880]: I1201 04:53:47.370022 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 04:53:47 crc kubenswrapper[4880]: I1201 04:53:47.371415 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 04:53:47 crc kubenswrapper[4880]: I1201 04:53:47.371718 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" gracePeriod=600 Dec 01 04:53:47 crc kubenswrapper[4880]: E1201 04:53:47.505669 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:53:48 crc kubenswrapper[4880]: I1201 04:53:48.270230 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" exitCode=0 Dec 01 04:53:48 crc kubenswrapper[4880]: I1201 04:53:48.270282 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9"} Dec 01 04:53:48 crc kubenswrapper[4880]: I1201 04:53:48.270654 4880 scope.go:117] "RemoveContainer" containerID="4db240966bb5aca8ada6478180f59a4290577a600d4defaea9c20d7bd5b6dd9f" Dec 01 04:53:48 crc kubenswrapper[4880]: I1201 04:53:48.271834 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:53:48 crc kubenswrapper[4880]: E1201 04:53:48.272355 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:53:54 crc kubenswrapper[4880]: I1201 04:53:54.230036 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:54 crc kubenswrapper[4880]: I1201 04:53:54.293836 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:55 crc kubenswrapper[4880]: I1201 04:53:55.019163 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9zmh"] Dec 01 04:53:55 crc kubenswrapper[4880]: I1201 04:53:55.360436 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9zmh" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="registry-server" containerID="cri-o://9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877" gracePeriod=2 Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.036008 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.144088 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrw9m\" (UniqueName: \"kubernetes.io/projected/c23e0765-f7c7-4e57-baa6-44c8782f51ba-kube-api-access-jrw9m\") pod \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.144848 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-catalog-content\") pod \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.145093 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-utilities\") pod \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\" (UID: \"c23e0765-f7c7-4e57-baa6-44c8782f51ba\") " Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.152996 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-utilities" (OuterVolumeSpecName: "utilities") pod "c23e0765-f7c7-4e57-baa6-44c8782f51ba" (UID: "c23e0765-f7c7-4e57-baa6-44c8782f51ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.154319 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23e0765-f7c7-4e57-baa6-44c8782f51ba-kube-api-access-jrw9m" (OuterVolumeSpecName: "kube-api-access-jrw9m") pod "c23e0765-f7c7-4e57-baa6-44c8782f51ba" (UID: "c23e0765-f7c7-4e57-baa6-44c8782f51ba"). InnerVolumeSpecName "kube-api-access-jrw9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.249203 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.249240 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrw9m\" (UniqueName: \"kubernetes.io/projected/c23e0765-f7c7-4e57-baa6-44c8782f51ba-kube-api-access-jrw9m\") on node \"crc\" DevicePath \"\"" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.266163 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c23e0765-f7c7-4e57-baa6-44c8782f51ba" (UID: "c23e0765-f7c7-4e57-baa6-44c8782f51ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.350098 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c23e0765-f7c7-4e57-baa6-44c8782f51ba-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.372750 4880 generic.go:334] "Generic (PLEG): container finished" podID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerID="9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877" exitCode=0 Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.372793 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerDied","Data":"9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877"} Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.372819 4880 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-j9zmh" event={"ID":"c23e0765-f7c7-4e57-baa6-44c8782f51ba","Type":"ContainerDied","Data":"feefb2852bb9aa2645f21c811d5954a772386062cac18d49acc09cfeb3c7fd27"} Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.372837 4880 scope.go:117] "RemoveContainer" containerID="9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.373078 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9zmh" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.395113 4880 scope.go:117] "RemoveContainer" containerID="36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.415327 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9zmh"] Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.424885 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9zmh"] Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.432506 4880 scope.go:117] "RemoveContainer" containerID="ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.482427 4880 scope.go:117] "RemoveContainer" containerID="9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877" Dec 01 04:53:56 crc kubenswrapper[4880]: E1201 04:53:56.483363 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877\": container with ID starting with 9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877 not found: ID does not exist" containerID="9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.483405 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877"} err="failed to get container status \"9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877\": rpc error: code = NotFound desc = could not find container \"9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877\": container with ID starting with 9c31c75e7623953fa62fbca24d0d29850d927d45a6114e79728755422e681877 not found: ID does not exist" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.483431 4880 scope.go:117] "RemoveContainer" containerID="36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3" Dec 01 04:53:56 crc kubenswrapper[4880]: E1201 04:53:56.483789 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3\": container with ID starting with 36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3 not found: ID does not exist" containerID="36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.483822 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3"} err="failed to get container status \"36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3\": rpc error: code = NotFound desc = could not find container \"36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3\": container with ID starting with 36e3d1bb05418de1b57dd27ab791cd0cca4cc2fc4b7865e5ea9e817f0710d0d3 not found: ID does not exist" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.483843 4880 scope.go:117] "RemoveContainer" containerID="ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60" Dec 01 04:53:56 crc kubenswrapper[4880]: E1201 
04:53:56.484318 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60\": container with ID starting with ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60 not found: ID does not exist" containerID="ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.484342 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60"} err="failed to get container status \"ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60\": rpc error: code = NotFound desc = could not find container \"ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60\": container with ID starting with ea3292b536dd257b982521f3a6714de5da992750716eda0cec5bcd65c24b2c60 not found: ID does not exist" Dec 01 04:53:56 crc kubenswrapper[4880]: I1201 04:53:56.810943 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" path="/var/lib/kubelet/pods/c23e0765-f7c7-4e57-baa6-44c8782f51ba/volumes" Dec 01 04:53:58 crc kubenswrapper[4880]: I1201 04:53:58.784207 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:53:58 crc kubenswrapper[4880]: E1201 04:53:58.785008 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.013483 
4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hdvv"] Dec 01 04:54:11 crc kubenswrapper[4880]: E1201 04:54:11.014338 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="registry-server" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.014352 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="registry-server" Dec 01 04:54:11 crc kubenswrapper[4880]: E1201 04:54:11.014368 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="extract-content" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.014374 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="extract-content" Dec 01 04:54:11 crc kubenswrapper[4880]: E1201 04:54:11.014402 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="extract-utilities" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.014409 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="extract-utilities" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.014584 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23e0765-f7c7-4e57-baa6-44c8782f51ba" containerName="registry-server" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.015894 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.036948 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hdvv"] Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.191757 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-utilities\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.191794 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnwg\" (UniqueName: \"kubernetes.io/projected/2815ab05-0343-4bff-bec7-063d75508ebe-kube-api-access-znnwg\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.191901 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-catalog-content\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.293961 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-utilities\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.294000 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-znnwg\" (UniqueName: \"kubernetes.io/projected/2815ab05-0343-4bff-bec7-063d75508ebe-kube-api-access-znnwg\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.294099 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-catalog-content\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.294568 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-utilities\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.294685 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-catalog-content\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.315761 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnwg\" (UniqueName: \"kubernetes.io/projected/2815ab05-0343-4bff-bec7-063d75508ebe-kube-api-access-znnwg\") pod \"community-operators-7hdvv\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.394076 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:11 crc kubenswrapper[4880]: I1201 04:54:11.874251 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hdvv"] Dec 01 04:54:12 crc kubenswrapper[4880]: I1201 04:54:12.544002 4880 generic.go:334] "Generic (PLEG): container finished" podID="2815ab05-0343-4bff-bec7-063d75508ebe" containerID="aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce" exitCode=0 Dec 01 04:54:12 crc kubenswrapper[4880]: I1201 04:54:12.544126 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hdvv" event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerDied","Data":"aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce"} Dec 01 04:54:12 crc kubenswrapper[4880]: I1201 04:54:12.544365 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hdvv" event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerStarted","Data":"a23d229be34e2820311c086aa0439df417a34b26defca79a654a087fca844322"} Dec 01 04:54:12 crc kubenswrapper[4880]: I1201 04:54:12.785745 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:54:12 crc kubenswrapper[4880]: E1201 04:54:12.786468 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:54:13 crc kubenswrapper[4880]: I1201 04:54:13.556258 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hdvv" 
event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerStarted","Data":"fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3"} Dec 01 04:54:14 crc kubenswrapper[4880]: I1201 04:54:14.565652 4880 generic.go:334] "Generic (PLEG): container finished" podID="2815ab05-0343-4bff-bec7-063d75508ebe" containerID="fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3" exitCode=0 Dec 01 04:54:14 crc kubenswrapper[4880]: I1201 04:54:14.565699 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hdvv" event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerDied","Data":"fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3"} Dec 01 04:54:15 crc kubenswrapper[4880]: I1201 04:54:15.576807 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hdvv" event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerStarted","Data":"8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65"} Dec 01 04:54:15 crc kubenswrapper[4880]: I1201 04:54:15.616280 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hdvv" podStartSLOduration=3.092957296 podStartE2EDuration="5.616265795s" podCreationTimestamp="2025-12-01 04:54:10 +0000 UTC" firstStartedPulling="2025-12-01 04:54:12.546017357 +0000 UTC m=+7082.057271729" lastFinishedPulling="2025-12-01 04:54:15.069325856 +0000 UTC m=+7084.580580228" observedRunningTime="2025-12-01 04:54:15.612122453 +0000 UTC m=+7085.123376845" watchObservedRunningTime="2025-12-01 04:54:15.616265795 +0000 UTC m=+7085.127520167" Dec 01 04:54:21 crc kubenswrapper[4880]: I1201 04:54:21.395100 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:21 crc kubenswrapper[4880]: I1201 04:54:21.395598 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:21 crc kubenswrapper[4880]: I1201 04:54:21.482695 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:21 crc kubenswrapper[4880]: I1201 04:54:21.727164 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:21 crc kubenswrapper[4880]: I1201 04:54:21.797688 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hdvv"] Dec 01 04:54:23 crc kubenswrapper[4880]: I1201 04:54:23.663304 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hdvv" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="registry-server" containerID="cri-o://8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65" gracePeriod=2 Dec 01 04:54:23 crc kubenswrapper[4880]: I1201 04:54:23.787575 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:54:23 crc kubenswrapper[4880]: E1201 04:54:23.789073 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.178983 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.279676 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znnwg\" (UniqueName: \"kubernetes.io/projected/2815ab05-0343-4bff-bec7-063d75508ebe-kube-api-access-znnwg\") pod \"2815ab05-0343-4bff-bec7-063d75508ebe\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.279744 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-catalog-content\") pod \"2815ab05-0343-4bff-bec7-063d75508ebe\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.279772 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-utilities\") pod \"2815ab05-0343-4bff-bec7-063d75508ebe\" (UID: \"2815ab05-0343-4bff-bec7-063d75508ebe\") " Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.280503 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-utilities" (OuterVolumeSpecName: "utilities") pod "2815ab05-0343-4bff-bec7-063d75508ebe" (UID: "2815ab05-0343-4bff-bec7-063d75508ebe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.290737 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2815ab05-0343-4bff-bec7-063d75508ebe-kube-api-access-znnwg" (OuterVolumeSpecName: "kube-api-access-znnwg") pod "2815ab05-0343-4bff-bec7-063d75508ebe" (UID: "2815ab05-0343-4bff-bec7-063d75508ebe"). InnerVolumeSpecName "kube-api-access-znnwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.335276 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2815ab05-0343-4bff-bec7-063d75508ebe" (UID: "2815ab05-0343-4bff-bec7-063d75508ebe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.384771 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znnwg\" (UniqueName: \"kubernetes.io/projected/2815ab05-0343-4bff-bec7-063d75508ebe-kube-api-access-znnwg\") on node \"crc\" DevicePath \"\"" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.385284 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.385303 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2815ab05-0343-4bff-bec7-063d75508ebe-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.681340 4880 generic.go:334] "Generic (PLEG): container finished" podID="2815ab05-0343-4bff-bec7-063d75508ebe" containerID="8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65" exitCode=0 Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.681405 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hdvv" event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerDied","Data":"8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65"} Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.681435 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-7hdvv" event={"ID":"2815ab05-0343-4bff-bec7-063d75508ebe","Type":"ContainerDied","Data":"a23d229be34e2820311c086aa0439df417a34b26defca79a654a087fca844322"} Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.681455 4880 scope.go:117] "RemoveContainer" containerID="8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.681597 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hdvv" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.725358 4880 scope.go:117] "RemoveContainer" containerID="fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.727040 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hdvv"] Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.739217 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hdvv"] Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.756759 4880 scope.go:117] "RemoveContainer" containerID="aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.804184 4880 scope.go:117] "RemoveContainer" containerID="8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65" Dec 01 04:54:24 crc kubenswrapper[4880]: E1201 04:54:24.804569 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65\": container with ID starting with 8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65 not found: ID does not exist" containerID="8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 
04:54:24.804600 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65"} err="failed to get container status \"8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65\": rpc error: code = NotFound desc = could not find container \"8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65\": container with ID starting with 8301a0dc8f35baff7fe61e07cfb965f8f5dd883807705f0500ae1c7956741c65 not found: ID does not exist" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.804625 4880 scope.go:117] "RemoveContainer" containerID="fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3" Dec 01 04:54:24 crc kubenswrapper[4880]: E1201 04:54:24.805061 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3\": container with ID starting with fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3 not found: ID does not exist" containerID="fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.805088 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3"} err="failed to get container status \"fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3\": rpc error: code = NotFound desc = could not find container \"fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3\": container with ID starting with fc9cb10fe50f9980bd102ad8036736a5bea15885ffebdfb402ddf39bd37b41d3 not found: ID does not exist" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.805104 4880 scope.go:117] "RemoveContainer" containerID="aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce" Dec 01 04:54:24 crc 
kubenswrapper[4880]: E1201 04:54:24.805481 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce\": container with ID starting with aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce not found: ID does not exist" containerID="aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.805502 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce"} err="failed to get container status \"aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce\": rpc error: code = NotFound desc = could not find container \"aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce\": container with ID starting with aa6d68e4399a743de899239877714d88eedc73cb42ac31ee3d0be9a84b8e30ce not found: ID does not exist" Dec 01 04:54:24 crc kubenswrapper[4880]: I1201 04:54:24.810130 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" path="/var/lib/kubelet/pods/2815ab05-0343-4bff-bec7-063d75508ebe/volumes" Dec 01 04:54:38 crc kubenswrapper[4880]: I1201 04:54:38.785172 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:54:38 crc kubenswrapper[4880]: E1201 04:54:38.789738 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:54:50 crc 
kubenswrapper[4880]: I1201 04:54:50.803507 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:54:50 crc kubenswrapper[4880]: E1201 04:54:50.809079 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:55:02 crc kubenswrapper[4880]: I1201 04:55:02.785549 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:55:02 crc kubenswrapper[4880]: E1201 04:55:02.789044 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:55:17 crc kubenswrapper[4880]: I1201 04:55:17.784376 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:55:17 crc kubenswrapper[4880]: E1201 04:55:17.785516 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 
01 04:55:32 crc kubenswrapper[4880]: I1201 04:55:32.784506 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:55:32 crc kubenswrapper[4880]: E1201 04:55:32.785446 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:55:44 crc kubenswrapper[4880]: I1201 04:55:44.784375 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:55:44 crc kubenswrapper[4880]: E1201 04:55:44.785051 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:55:58 crc kubenswrapper[4880]: I1201 04:55:58.783647 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:55:58 crc kubenswrapper[4880]: E1201 04:55:58.784272 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:56:12 crc kubenswrapper[4880]: I1201 04:56:12.785804 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:56:12 crc kubenswrapper[4880]: E1201 04:56:12.787302 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:56:25 crc kubenswrapper[4880]: I1201 04:56:25.783524 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:56:25 crc kubenswrapper[4880]: E1201 04:56:25.784453 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:56:37 crc kubenswrapper[4880]: I1201 04:56:37.784731 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:56:37 crc kubenswrapper[4880]: E1201 04:56:37.785471 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:56:50 crc kubenswrapper[4880]: I1201 04:56:50.798227 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:56:50 crc kubenswrapper[4880]: E1201 04:56:50.799219 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.724463 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jclf"] Dec 01 04:56:53 crc kubenswrapper[4880]: E1201 04:56:53.725493 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="registry-server" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.725533 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="registry-server" Dec 01 04:56:53 crc kubenswrapper[4880]: E1201 04:56:53.725563 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="extract-utilities" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.725574 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="extract-utilities" Dec 01 04:56:53 crc kubenswrapper[4880]: E1201 04:56:53.725594 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="extract-content" Dec 01 04:56:53 crc kubenswrapper[4880]: 
I1201 04:56:53.725602 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="extract-content" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.725921 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2815ab05-0343-4bff-bec7-063d75508ebe" containerName="registry-server" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.727710 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.741025 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jclf"] Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.834837 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-utilities\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.834961 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-catalog-content\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.835174 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgc5\" (UniqueName: \"kubernetes.io/projected/34265544-5371-47b6-a687-36f045249650-kube-api-access-wfgc5\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc 
kubenswrapper[4880]: I1201 04:56:53.913071 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7bbp6"] Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.915353 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.925749 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7bbp6"] Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.936981 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-utilities\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937042 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-catalog-content\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937144 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgc5\" (UniqueName: \"kubernetes.io/projected/34265544-5371-47b6-a687-36f045249650-kube-api-access-wfgc5\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937302 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k95k5\" (UniqueName: 
\"kubernetes.io/projected/c3452802-1db9-41cb-9855-6bdb100984c1-kube-api-access-k95k5\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937335 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-utilities\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937373 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-catalog-content\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937936 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-utilities\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.937958 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-catalog-content\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:53 crc kubenswrapper[4880]: I1201 04:56:53.975045 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgc5\" (UniqueName: 
\"kubernetes.io/projected/34265544-5371-47b6-a687-36f045249650-kube-api-access-wfgc5\") pod \"redhat-marketplace-9jclf\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.040554 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k95k5\" (UniqueName: \"kubernetes.io/projected/c3452802-1db9-41cb-9855-6bdb100984c1-kube-api-access-k95k5\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.040646 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-utilities\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.040678 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-catalog-content\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.041191 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-utilities\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.041296 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-catalog-content\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.047996 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.065625 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k95k5\" (UniqueName: \"kubernetes.io/projected/c3452802-1db9-41cb-9855-6bdb100984c1-kube-api-access-k95k5\") pod \"certified-operators-7bbp6\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.232092 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.601519 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jclf"] Dec 01 04:56:54 crc kubenswrapper[4880]: I1201 04:56:54.752850 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7bbp6"] Dec 01 04:56:54 crc kubenswrapper[4880]: W1201 04:56:54.754903 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3452802_1db9_41cb_9855_6bdb100984c1.slice/crio-5cb68cd2e4f94f4fb2b82c6e54c535db5a7b417fc99b82c9c4290601c373bd11 WatchSource:0}: Error finding container 5cb68cd2e4f94f4fb2b82c6e54c535db5a7b417fc99b82c9c4290601c373bd11: Status 404 returned error can't find the container with id 5cb68cd2e4f94f4fb2b82c6e54c535db5a7b417fc99b82c9c4290601c373bd11 Dec 01 04:56:55 crc kubenswrapper[4880]: I1201 04:56:55.363296 4880 generic.go:334] "Generic (PLEG): 
container finished" podID="c3452802-1db9-41cb-9855-6bdb100984c1" containerID="2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380" exitCode=0 Dec 01 04:56:55 crc kubenswrapper[4880]: I1201 04:56:55.363392 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerDied","Data":"2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380"} Dec 01 04:56:55 crc kubenswrapper[4880]: I1201 04:56:55.363605 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerStarted","Data":"5cb68cd2e4f94f4fb2b82c6e54c535db5a7b417fc99b82c9c4290601c373bd11"} Dec 01 04:56:55 crc kubenswrapper[4880]: I1201 04:56:55.366529 4880 generic.go:334] "Generic (PLEG): container finished" podID="34265544-5371-47b6-a687-36f045249650" containerID="8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145" exitCode=0 Dec 01 04:56:55 crc kubenswrapper[4880]: I1201 04:56:55.366567 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerDied","Data":"8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145"} Dec 01 04:56:55 crc kubenswrapper[4880]: I1201 04:56:55.366587 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerStarted","Data":"c90d3b4a27030cd346961666c83a3789c029c15c3e93b88f9c3d6c92daba9621"} Dec 01 04:56:56 crc kubenswrapper[4880]: I1201 04:56:56.378476 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" 
event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerStarted","Data":"344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa"} Dec 01 04:56:56 crc kubenswrapper[4880]: I1201 04:56:56.381908 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerStarted","Data":"b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd"} Dec 01 04:56:58 crc kubenswrapper[4880]: I1201 04:56:58.402622 4880 generic.go:334] "Generic (PLEG): container finished" podID="c3452802-1db9-41cb-9855-6bdb100984c1" containerID="b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd" exitCode=0 Dec 01 04:56:58 crc kubenswrapper[4880]: I1201 04:56:58.402794 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerDied","Data":"b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd"} Dec 01 04:56:58 crc kubenswrapper[4880]: I1201 04:56:58.407862 4880 generic.go:334] "Generic (PLEG): container finished" podID="34265544-5371-47b6-a687-36f045249650" containerID="344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa" exitCode=0 Dec 01 04:56:58 crc kubenswrapper[4880]: I1201 04:56:58.407953 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerDied","Data":"344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa"} Dec 01 04:56:59 crc kubenswrapper[4880]: I1201 04:56:59.420318 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerStarted","Data":"33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26"} Dec 01 04:56:59 crc kubenswrapper[4880]: I1201 
04:56:59.424207 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerStarted","Data":"58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795"} Dec 01 04:56:59 crc kubenswrapper[4880]: I1201 04:56:59.443624 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jclf" podStartSLOduration=2.825707208 podStartE2EDuration="6.443606073s" podCreationTimestamp="2025-12-01 04:56:53 +0000 UTC" firstStartedPulling="2025-12-01 04:56:55.367930066 +0000 UTC m=+7244.879184438" lastFinishedPulling="2025-12-01 04:56:58.985828921 +0000 UTC m=+7248.497083303" observedRunningTime="2025-12-01 04:56:59.435239448 +0000 UTC m=+7248.946493830" watchObservedRunningTime="2025-12-01 04:56:59.443606073 +0000 UTC m=+7248.954860445" Dec 01 04:56:59 crc kubenswrapper[4880]: I1201 04:56:59.459766 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7bbp6" podStartSLOduration=2.923584688 podStartE2EDuration="6.459750169s" podCreationTimestamp="2025-12-01 04:56:53 +0000 UTC" firstStartedPulling="2025-12-01 04:56:55.36522653 +0000 UTC m=+7244.876480902" lastFinishedPulling="2025-12-01 04:56:58.901392001 +0000 UTC m=+7248.412646383" observedRunningTime="2025-12-01 04:56:59.452752718 +0000 UTC m=+7248.964007090" watchObservedRunningTime="2025-12-01 04:56:59.459750169 +0000 UTC m=+7248.971004541" Dec 01 04:57:02 crc kubenswrapper[4880]: I1201 04:57:02.784450 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:57:02 crc kubenswrapper[4880]: E1201 04:57:02.785329 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.049115 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.049421 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.121941 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.233125 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.233743 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.279108 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.554030 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:57:04 crc kubenswrapper[4880]: I1201 04:57:04.556133 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:57:05 crc kubenswrapper[4880]: I1201 04:57:05.963306 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7bbp6"] Dec 01 04:57:06 crc kubenswrapper[4880]: I1201 04:57:06.497549 4880 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7bbp6" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="registry-server" containerID="cri-o://58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795" gracePeriod=2 Dec 01 04:57:06 crc kubenswrapper[4880]: I1201 04:57:06.964346 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jclf"] Dec 01 04:57:06 crc kubenswrapper[4880]: I1201 04:57:06.964587 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jclf" podUID="34265544-5371-47b6-a687-36f045249650" containerName="registry-server" containerID="cri-o://33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26" gracePeriod=2 Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.148999 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.330642 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k95k5\" (UniqueName: \"kubernetes.io/projected/c3452802-1db9-41cb-9855-6bdb100984c1-kube-api-access-k95k5\") pod \"c3452802-1db9-41cb-9855-6bdb100984c1\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.330890 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-utilities\") pod \"c3452802-1db9-41cb-9855-6bdb100984c1\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.331038 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-catalog-content\") pod \"c3452802-1db9-41cb-9855-6bdb100984c1\" (UID: \"c3452802-1db9-41cb-9855-6bdb100984c1\") " Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.333225 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-utilities" (OuterVolumeSpecName: "utilities") pod "c3452802-1db9-41cb-9855-6bdb100984c1" (UID: "c3452802-1db9-41cb-9855-6bdb100984c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.337585 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3452802-1db9-41cb-9855-6bdb100984c1-kube-api-access-k95k5" (OuterVolumeSpecName: "kube-api-access-k95k5") pod "c3452802-1db9-41cb-9855-6bdb100984c1" (UID: "c3452802-1db9-41cb-9855-6bdb100984c1"). InnerVolumeSpecName "kube-api-access-k95k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.392212 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3452802-1db9-41cb-9855-6bdb100984c1" (UID: "c3452802-1db9-41cb-9855-6bdb100984c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.398184 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.433699 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfgc5\" (UniqueName: \"kubernetes.io/projected/34265544-5371-47b6-a687-36f045249650-kube-api-access-wfgc5\") pod \"34265544-5371-47b6-a687-36f045249650\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.434341 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k95k5\" (UniqueName: \"kubernetes.io/projected/c3452802-1db9-41cb-9855-6bdb100984c1-kube-api-access-k95k5\") on node \"crc\" DevicePath \"\"" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.434363 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.434375 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3452802-1db9-41cb-9855-6bdb100984c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.437694 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34265544-5371-47b6-a687-36f045249650-kube-api-access-wfgc5" (OuterVolumeSpecName: "kube-api-access-wfgc5") pod "34265544-5371-47b6-a687-36f045249650" (UID: "34265544-5371-47b6-a687-36f045249650"). InnerVolumeSpecName "kube-api-access-wfgc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.506418 4880 generic.go:334] "Generic (PLEG): container finished" podID="34265544-5371-47b6-a687-36f045249650" containerID="33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26" exitCode=0 Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.506478 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jclf" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.506487 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerDied","Data":"33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26"} Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.506559 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jclf" event={"ID":"34265544-5371-47b6-a687-36f045249650","Type":"ContainerDied","Data":"c90d3b4a27030cd346961666c83a3789c029c15c3e93b88f9c3d6c92daba9621"} Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.506580 4880 scope.go:117] "RemoveContainer" containerID="33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.508640 4880 generic.go:334] "Generic (PLEG): container finished" podID="c3452802-1db9-41cb-9855-6bdb100984c1" containerID="58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795" exitCode=0 Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.508674 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerDied","Data":"58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795"} Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.508702 4880 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7bbp6" event={"ID":"c3452802-1db9-41cb-9855-6bdb100984c1","Type":"ContainerDied","Data":"5cb68cd2e4f94f4fb2b82c6e54c535db5a7b417fc99b82c9c4290601c373bd11"} Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.508770 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7bbp6" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.541514 4880 scope.go:117] "RemoveContainer" containerID="344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.542689 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-utilities\") pod \"34265544-5371-47b6-a687-36f045249650\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.542750 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-catalog-content\") pod \"34265544-5371-47b6-a687-36f045249650\" (UID: \"34265544-5371-47b6-a687-36f045249650\") " Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.543292 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfgc5\" (UniqueName: \"kubernetes.io/projected/34265544-5371-47b6-a687-36f045249650-kube-api-access-wfgc5\") on node \"crc\" DevicePath \"\"" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.543681 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-utilities" (OuterVolumeSpecName: "utilities") pod "34265544-5371-47b6-a687-36f045249650" (UID: "34265544-5371-47b6-a687-36f045249650"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.545907 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7bbp6"] Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.553935 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7bbp6"] Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.563985 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34265544-5371-47b6-a687-36f045249650" (UID: "34265544-5371-47b6-a687-36f045249650"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.572398 4880 scope.go:117] "RemoveContainer" containerID="8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.597657 4880 scope.go:117] "RemoveContainer" containerID="33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26" Dec 01 04:57:07 crc kubenswrapper[4880]: E1201 04:57:07.598091 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26\": container with ID starting with 33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26 not found: ID does not exist" containerID="33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.598293 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26"} err="failed to get container status 
\"33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26\": rpc error: code = NotFound desc = could not find container \"33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26\": container with ID starting with 33eee7614821d1aa1445da1893854333ef97d7e9336d318482ef78abf6408b26 not found: ID does not exist" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.598377 4880 scope.go:117] "RemoveContainer" containerID="344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa" Dec 01 04:57:07 crc kubenswrapper[4880]: E1201 04:57:07.598703 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa\": container with ID starting with 344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa not found: ID does not exist" containerID="344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.598724 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa"} err="failed to get container status \"344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa\": rpc error: code = NotFound desc = could not find container \"344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa\": container with ID starting with 344dcd7f7ea76cdbdc5f1f5952cd8fcfcd854c1a677e8db50ae6918813917afa not found: ID does not exist" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.598736 4880 scope.go:117] "RemoveContainer" containerID="8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145" Dec 01 04:57:07 crc kubenswrapper[4880]: E1201 04:57:07.598956 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145\": container with ID starting with 8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145 not found: ID does not exist" containerID="8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.599062 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145"} err="failed to get container status \"8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145\": rpc error: code = NotFound desc = could not find container \"8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145\": container with ID starting with 8b8add0174b1b2d8fe96785e45e97d369c141a16fdf5f52d474f5f64bcbd9145 not found: ID does not exist" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.599139 4880 scope.go:117] "RemoveContainer" containerID="58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.618404 4880 scope.go:117] "RemoveContainer" containerID="b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.644597 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.644628 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34265544-5371-47b6-a687-36f045249650-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.672310 4880 scope.go:117] "RemoveContainer" containerID="2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.713796 
4880 scope.go:117] "RemoveContainer" containerID="58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795" Dec 01 04:57:07 crc kubenswrapper[4880]: E1201 04:57:07.714277 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795\": container with ID starting with 58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795 not found: ID does not exist" containerID="58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.714306 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795"} err="failed to get container status \"58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795\": rpc error: code = NotFound desc = could not find container \"58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795\": container with ID starting with 58dde2b000e1c754086f5490a886f278ee301caa41a5bce2cee83eddef513795 not found: ID does not exist" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.714325 4880 scope.go:117] "RemoveContainer" containerID="b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd" Dec 01 04:57:07 crc kubenswrapper[4880]: E1201 04:57:07.714592 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd\": container with ID starting with b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd not found: ID does not exist" containerID="b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.714612 4880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd"} err="failed to get container status \"b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd\": rpc error: code = NotFound desc = could not find container \"b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd\": container with ID starting with b1ea219540bc15eeb13c54dbde1e2baefc489c8d7c717341e533bf096e7e9bfd not found: ID does not exist" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.714623 4880 scope.go:117] "RemoveContainer" containerID="2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380" Dec 01 04:57:07 crc kubenswrapper[4880]: E1201 04:57:07.714917 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380\": container with ID starting with 2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380 not found: ID does not exist" containerID="2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.714936 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380"} err="failed to get container status \"2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380\": rpc error: code = NotFound desc = could not find container \"2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380\": container with ID starting with 2805dde8831fecf957b9854b55efdbe1e843378e61099392f51f4c697b73e380 not found: ID does not exist" Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.836850 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jclf"] Dec 01 04:57:07 crc kubenswrapper[4880]: I1201 04:57:07.844287 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9jclf"] Dec 01 04:57:08 crc kubenswrapper[4880]: I1201 04:57:08.796368 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34265544-5371-47b6-a687-36f045249650" path="/var/lib/kubelet/pods/34265544-5371-47b6-a687-36f045249650/volumes" Dec 01 04:57:08 crc kubenswrapper[4880]: I1201 04:57:08.797407 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" path="/var/lib/kubelet/pods/c3452802-1db9-41cb-9855-6bdb100984c1/volumes" Dec 01 04:57:13 crc kubenswrapper[4880]: I1201 04:57:13.783937 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:57:13 crc kubenswrapper[4880]: E1201 04:57:13.784536 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:57:25 crc kubenswrapper[4880]: I1201 04:57:25.785266 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:57:25 crc kubenswrapper[4880]: E1201 04:57:25.785952 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:57:39 crc kubenswrapper[4880]: I1201 04:57:39.784328 4880 scope.go:117] "RemoveContainer" 
containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:57:39 crc kubenswrapper[4880]: E1201 04:57:39.785163 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:57:52 crc kubenswrapper[4880]: I1201 04:57:52.783886 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:57:52 crc kubenswrapper[4880]: E1201 04:57:52.784741 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:58:06 crc kubenswrapper[4880]: I1201 04:58:06.783979 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:58:06 crc kubenswrapper[4880]: E1201 04:58:06.784605 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:58:17 crc kubenswrapper[4880]: I1201 04:58:17.784855 4880 scope.go:117] 
"RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:58:17 crc kubenswrapper[4880]: E1201 04:58:17.785919 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:58:29 crc kubenswrapper[4880]: I1201 04:58:29.783796 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:58:29 crc kubenswrapper[4880]: E1201 04:58:29.784572 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:58:44 crc kubenswrapper[4880]: I1201 04:58:44.786689 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:58:44 crc kubenswrapper[4880]: E1201 04:58:44.787576 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 04:58:59 crc kubenswrapper[4880]: I1201 04:58:59.784476 
4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 04:59:00 crc kubenswrapper[4880]: I1201 04:59:00.666489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"8aa8403e519d191453d175aa7e1c7a6aabf8297c94d476bbb94920ff9c5a6461"} Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.215624 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4"] Dec 01 05:00:00 crc kubenswrapper[4880]: E1201 05:00:00.216498 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34265544-5371-47b6-a687-36f045249650" containerName="extract-utilities" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216513 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="34265544-5371-47b6-a687-36f045249650" containerName="extract-utilities" Dec 01 05:00:00 crc kubenswrapper[4880]: E1201 05:00:00.216528 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34265544-5371-47b6-a687-36f045249650" containerName="registry-server" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216536 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="34265544-5371-47b6-a687-36f045249650" containerName="registry-server" Dec 01 05:00:00 crc kubenswrapper[4880]: E1201 05:00:00.216549 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="extract-content" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216557 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="extract-content" Dec 01 05:00:00 crc kubenswrapper[4880]: E1201 05:00:00.216571 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34265544-5371-47b6-a687-36f045249650" containerName="extract-content" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216578 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="34265544-5371-47b6-a687-36f045249650" containerName="extract-content" Dec 01 05:00:00 crc kubenswrapper[4880]: E1201 05:00:00.216597 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="registry-server" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216604 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="registry-server" Dec 01 05:00:00 crc kubenswrapper[4880]: E1201 05:00:00.216617 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="extract-utilities" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216626 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="extract-utilities" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216838 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="34265544-5371-47b6-a687-36f045249650" containerName="registry-server" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.216859 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3452802-1db9-41cb-9855-6bdb100984c1" containerName="registry-server" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.217595 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.224549 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.224549 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.236573 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4"] Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.294808 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77b31c5-c4b9-4604-808e-653a65764a89-secret-volume\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.294844 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77b31c5-c4b9-4604-808e-653a65764a89-config-volume\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.294977 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vkz\" (UniqueName: \"kubernetes.io/projected/c77b31c5-c4b9-4604-808e-653a65764a89-kube-api-access-z9vkz\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.396994 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77b31c5-c4b9-4604-808e-653a65764a89-config-volume\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.397116 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vkz\" (UniqueName: \"kubernetes.io/projected/c77b31c5-c4b9-4604-808e-653a65764a89-kube-api-access-z9vkz\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.397289 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77b31c5-c4b9-4604-808e-653a65764a89-secret-volume\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.397837 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77b31c5-c4b9-4604-808e-653a65764a89-config-volume\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.408601 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c77b31c5-c4b9-4604-808e-653a65764a89-secret-volume\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.413688 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vkz\" (UniqueName: \"kubernetes.io/projected/c77b31c5-c4b9-4604-808e-653a65764a89-kube-api-access-z9vkz\") pod \"collect-profiles-29409420-nvdt4\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.553466 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:00 crc kubenswrapper[4880]: I1201 05:00:00.889460 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4"] Dec 01 05:00:01 crc kubenswrapper[4880]: I1201 05:00:01.445304 4880 generic.go:334] "Generic (PLEG): container finished" podID="c77b31c5-c4b9-4604-808e-653a65764a89" containerID="ea4c7bd59f6db72a7d64fa78c843a7625b49722a60f09590d1eace30f83d8bd2" exitCode=0 Dec 01 05:00:01 crc kubenswrapper[4880]: I1201 05:00:01.445353 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" event={"ID":"c77b31c5-c4b9-4604-808e-653a65764a89","Type":"ContainerDied","Data":"ea4c7bd59f6db72a7d64fa78c843a7625b49722a60f09590d1eace30f83d8bd2"} Dec 01 05:00:01 crc kubenswrapper[4880]: I1201 05:00:01.445385 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" 
event={"ID":"c77b31c5-c4b9-4604-808e-653a65764a89","Type":"ContainerStarted","Data":"0330c022421d177107d69f206042ba1e72fc73496cfe6d38101bf7c0d04b595b"} Dec 01 05:00:02 crc kubenswrapper[4880]: I1201 05:00:02.899020 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.053995 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77b31c5-c4b9-4604-808e-653a65764a89-secret-volume\") pod \"c77b31c5-c4b9-4604-808e-653a65764a89\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.054175 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vkz\" (UniqueName: \"kubernetes.io/projected/c77b31c5-c4b9-4604-808e-653a65764a89-kube-api-access-z9vkz\") pod \"c77b31c5-c4b9-4604-808e-653a65764a89\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.054276 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77b31c5-c4b9-4604-808e-653a65764a89-config-volume\") pod \"c77b31c5-c4b9-4604-808e-653a65764a89\" (UID: \"c77b31c5-c4b9-4604-808e-653a65764a89\") " Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.055224 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77b31c5-c4b9-4604-808e-653a65764a89-config-volume" (OuterVolumeSpecName: "config-volume") pod "c77b31c5-c4b9-4604-808e-653a65764a89" (UID: "c77b31c5-c4b9-4604-808e-653a65764a89"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.059794 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77b31c5-c4b9-4604-808e-653a65764a89-kube-api-access-z9vkz" (OuterVolumeSpecName: "kube-api-access-z9vkz") pod "c77b31c5-c4b9-4604-808e-653a65764a89" (UID: "c77b31c5-c4b9-4604-808e-653a65764a89"). InnerVolumeSpecName "kube-api-access-z9vkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.059908 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77b31c5-c4b9-4604-808e-653a65764a89-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c77b31c5-c4b9-4604-808e-653a65764a89" (UID: "c77b31c5-c4b9-4604-808e-653a65764a89"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.156681 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9vkz\" (UniqueName: \"kubernetes.io/projected/c77b31c5-c4b9-4604-808e-653a65764a89-kube-api-access-z9vkz\") on node \"crc\" DevicePath \"\"" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.156713 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77b31c5-c4b9-4604-808e-653a65764a89-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.156722 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77b31c5-c4b9-4604-808e-653a65764a89-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.466478 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" 
event={"ID":"c77b31c5-c4b9-4604-808e-653a65764a89","Type":"ContainerDied","Data":"0330c022421d177107d69f206042ba1e72fc73496cfe6d38101bf7c0d04b595b"} Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.466528 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.466535 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0330c022421d177107d69f206042ba1e72fc73496cfe6d38101bf7c0d04b595b" Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.990589 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"] Dec 01 05:00:03 crc kubenswrapper[4880]: I1201 05:00:03.998819 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409375-9666t"] Dec 01 05:00:04 crc kubenswrapper[4880]: I1201 05:00:04.799147 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="093c5eb6-5fc7-4bd2-8483-16dd812da6b5" path="/var/lib/kubelet/pods/093c5eb6-5fc7-4bd2-8483-16dd812da6b5/volumes" Dec 01 05:00:45 crc kubenswrapper[4880]: I1201 05:00:45.256220 4880 scope.go:117] "RemoveContainer" containerID="e317f322db4e1e7d2f245346fae56a9129b1db56324632e4ce68f2e5ccc11d2a" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.153315 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29409421-zsxm7"] Dec 01 05:01:00 crc kubenswrapper[4880]: E1201 05:01:00.154318 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77b31c5-c4b9-4604-808e-653a65764a89" containerName="collect-profiles" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.154336 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77b31c5-c4b9-4604-808e-653a65764a89" containerName="collect-profiles" Dec 01 05:01:00 crc 
kubenswrapper[4880]: I1201 05:01:00.154576 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77b31c5-c4b9-4604-808e-653a65764a89" containerName="collect-profiles" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.155292 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.184310 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409421-zsxm7"] Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.337645 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27r6\" (UniqueName: \"kubernetes.io/projected/aa72da2e-8e61-47d2-a13d-93498dead267-kube-api-access-g27r6\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.337713 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-config-data\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.337835 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-combined-ca-bundle\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.337860 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-fernet-keys\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.439855 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g27r6\" (UniqueName: \"kubernetes.io/projected/aa72da2e-8e61-47d2-a13d-93498dead267-kube-api-access-g27r6\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.440056 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-config-data\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.440251 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-combined-ca-bundle\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.440301 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-fernet-keys\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.448826 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-fernet-keys\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.449651 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-config-data\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.454556 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-combined-ca-bundle\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.458792 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g27r6\" (UniqueName: \"kubernetes.io/projected/aa72da2e-8e61-47d2-a13d-93498dead267-kube-api-access-g27r6\") pod \"keystone-cron-29409421-zsxm7\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.474725 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:00 crc kubenswrapper[4880]: I1201 05:01:00.949408 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409421-zsxm7"] Dec 01 05:01:00 crc kubenswrapper[4880]: W1201 05:01:00.953143 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa72da2e_8e61_47d2_a13d_93498dead267.slice/crio-505933547e18289b61a7cd5283c8c1f5a137e55b20498f00c19916a38967bb49 WatchSource:0}: Error finding container 505933547e18289b61a7cd5283c8c1f5a137e55b20498f00c19916a38967bb49: Status 404 returned error can't find the container with id 505933547e18289b61a7cd5283c8c1f5a137e55b20498f00c19916a38967bb49 Dec 01 05:01:01 crc kubenswrapper[4880]: I1201 05:01:01.312068 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409421-zsxm7" event={"ID":"aa72da2e-8e61-47d2-a13d-93498dead267","Type":"ContainerStarted","Data":"179c276a9959fbefa2e9e1124210b7213b68ed5014c8d8f9300f78dfb87ad6d8"} Dec 01 05:01:01 crc kubenswrapper[4880]: I1201 05:01:01.312360 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409421-zsxm7" event={"ID":"aa72da2e-8e61-47d2-a13d-93498dead267","Type":"ContainerStarted","Data":"505933547e18289b61a7cd5283c8c1f5a137e55b20498f00c19916a38967bb49"} Dec 01 05:01:01 crc kubenswrapper[4880]: I1201 05:01:01.339821 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29409421-zsxm7" podStartSLOduration=1.339802618 podStartE2EDuration="1.339802618s" podCreationTimestamp="2025-12-01 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 05:01:01.325693682 +0000 UTC m=+7490.836948054" watchObservedRunningTime="2025-12-01 05:01:01.339802618 +0000 UTC m=+7490.851056990" Dec 01 05:01:04 crc 
kubenswrapper[4880]: I1201 05:01:04.346389 4880 generic.go:334] "Generic (PLEG): container finished" podID="aa72da2e-8e61-47d2-a13d-93498dead267" containerID="179c276a9959fbefa2e9e1124210b7213b68ed5014c8d8f9300f78dfb87ad6d8" exitCode=0 Dec 01 05:01:04 crc kubenswrapper[4880]: I1201 05:01:04.346491 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409421-zsxm7" event={"ID":"aa72da2e-8e61-47d2-a13d-93498dead267","Type":"ContainerDied","Data":"179c276a9959fbefa2e9e1124210b7213b68ed5014c8d8f9300f78dfb87ad6d8"} Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.816497 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.944208 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-combined-ca-bundle\") pod \"aa72da2e-8e61-47d2-a13d-93498dead267\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.944581 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-fernet-keys\") pod \"aa72da2e-8e61-47d2-a13d-93498dead267\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.944730 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g27r6\" (UniqueName: \"kubernetes.io/projected/aa72da2e-8e61-47d2-a13d-93498dead267-kube-api-access-g27r6\") pod \"aa72da2e-8e61-47d2-a13d-93498dead267\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.944773 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-config-data\") pod \"aa72da2e-8e61-47d2-a13d-93498dead267\" (UID: \"aa72da2e-8e61-47d2-a13d-93498dead267\") " Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.950828 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aa72da2e-8e61-47d2-a13d-93498dead267" (UID: "aa72da2e-8e61-47d2-a13d-93498dead267"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.951374 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa72da2e-8e61-47d2-a13d-93498dead267-kube-api-access-g27r6" (OuterVolumeSpecName: "kube-api-access-g27r6") pod "aa72da2e-8e61-47d2-a13d-93498dead267" (UID: "aa72da2e-8e61-47d2-a13d-93498dead267"). InnerVolumeSpecName "kube-api-access-g27r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:01:05 crc kubenswrapper[4880]: I1201 05:01:05.979031 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa72da2e-8e61-47d2-a13d-93498dead267" (UID: "aa72da2e-8e61-47d2-a13d-93498dead267"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.005775 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-config-data" (OuterVolumeSpecName: "config-data") pod "aa72da2e-8e61-47d2-a13d-93498dead267" (UID: "aa72da2e-8e61-47d2-a13d-93498dead267"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.048164 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.048194 4880 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.048204 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g27r6\" (UniqueName: \"kubernetes.io/projected/aa72da2e-8e61-47d2-a13d-93498dead267-kube-api-access-g27r6\") on node \"crc\" DevicePath \"\"" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.048213 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa72da2e-8e61-47d2-a13d-93498dead267-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.365671 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409421-zsxm7" event={"ID":"aa72da2e-8e61-47d2-a13d-93498dead267","Type":"ContainerDied","Data":"505933547e18289b61a7cd5283c8c1f5a137e55b20498f00c19916a38967bb49"} Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.365735 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="505933547e18289b61a7cd5283c8c1f5a137e55b20498f00c19916a38967bb49" Dec 01 05:01:06 crc kubenswrapper[4880]: I1201 05:01:06.365751 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409421-zsxm7" Dec 01 05:01:17 crc kubenswrapper[4880]: I1201 05:01:17.370143 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:01:17 crc kubenswrapper[4880]: I1201 05:01:17.370791 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:01:47 crc kubenswrapper[4880]: I1201 05:01:47.368711 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:01:47 crc kubenswrapper[4880]: I1201 05:01:47.369207 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:02:17 crc kubenswrapper[4880]: I1201 05:02:17.368792 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:02:17 crc kubenswrapper[4880]: I1201 05:02:17.369365 4880 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:02:17 crc kubenswrapper[4880]: I1201 05:02:17.369422 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 05:02:17 crc kubenswrapper[4880]: I1201 05:02:17.370592 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8aa8403e519d191453d175aa7e1c7a6aabf8297c94d476bbb94920ff9c5a6461"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 05:02:17 crc kubenswrapper[4880]: I1201 05:02:17.370666 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://8aa8403e519d191453d175aa7e1c7a6aabf8297c94d476bbb94920ff9c5a6461" gracePeriod=600 Dec 01 05:02:18 crc kubenswrapper[4880]: I1201 05:02:18.125101 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="8aa8403e519d191453d175aa7e1c7a6aabf8297c94d476bbb94920ff9c5a6461" exitCode=0 Dec 01 05:02:18 crc kubenswrapper[4880]: I1201 05:02:18.125167 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"8aa8403e519d191453d175aa7e1c7a6aabf8297c94d476bbb94920ff9c5a6461"} Dec 01 05:02:18 crc kubenswrapper[4880]: I1201 
05:02:18.125489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3"} Dec 01 05:02:18 crc kubenswrapper[4880]: I1201 05:02:18.125712 4880 scope.go:117] "RemoveContainer" containerID="7d258d28e777e673fe45cd7488a9216cf32069c0e10af8b099e93b4b610215b9" Dec 01 05:04:17 crc kubenswrapper[4880]: I1201 05:04:17.369280 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:04:17 crc kubenswrapper[4880]: I1201 05:04:17.370171 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:04:47 crc kubenswrapper[4880]: I1201 05:04:47.369275 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:04:47 crc kubenswrapper[4880]: I1201 05:04:47.369833 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:04:59 crc 
kubenswrapper[4880]: I1201 05:04:59.275572 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhr6w"] Dec 01 05:04:59 crc kubenswrapper[4880]: E1201 05:04:59.276489 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa72da2e-8e61-47d2-a13d-93498dead267" containerName="keystone-cron" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.276501 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa72da2e-8e61-47d2-a13d-93498dead267" containerName="keystone-cron" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.276697 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa72da2e-8e61-47d2-a13d-93498dead267" containerName="keystone-cron" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.279524 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.289436 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhr6w"] Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.397341 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-utilities\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.397388 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-catalog-content\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.397415 4880 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr87k\" (UniqueName: \"kubernetes.io/projected/349042bf-7104-4424-abc8-8a180f0a0f5f-kube-api-access-gr87k\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.499531 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-utilities\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.499573 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-catalog-content\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.499597 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr87k\" (UniqueName: \"kubernetes.io/projected/349042bf-7104-4424-abc8-8a180f0a0f5f-kube-api-access-gr87k\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.500115 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-utilities\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.500232 4880 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-catalog-content\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.518997 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr87k\" (UniqueName: \"kubernetes.io/projected/349042bf-7104-4424-abc8-8a180f0a0f5f-kube-api-access-gr87k\") pod \"redhat-operators-zhr6w\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:04:59 crc kubenswrapper[4880]: I1201 05:04:59.601469 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:00 crc kubenswrapper[4880]: I1201 05:05:00.278850 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhr6w"] Dec 01 05:05:01 crc kubenswrapper[4880]: I1201 05:05:01.025537 4880 generic.go:334] "Generic (PLEG): container finished" podID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerID="7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b" exitCode=0 Dec 01 05:05:01 crc kubenswrapper[4880]: I1201 05:05:01.025645 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerDied","Data":"7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b"} Dec 01 05:05:01 crc kubenswrapper[4880]: I1201 05:05:01.025934 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerStarted","Data":"ee5a63bd141b3ca34ae316081edc0d3d490349e5cf4472c8a68b8ffb1bfcf7aa"} Dec 01 05:05:01 crc kubenswrapper[4880]: I1201 05:05:01.028342 
4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 05:05:03 crc kubenswrapper[4880]: I1201 05:05:03.045990 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerStarted","Data":"aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d"} Dec 01 05:05:05 crc kubenswrapper[4880]: I1201 05:05:05.068408 4880 generic.go:334] "Generic (PLEG): container finished" podID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerID="aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d" exitCode=0 Dec 01 05:05:05 crc kubenswrapper[4880]: I1201 05:05:05.068489 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerDied","Data":"aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d"} Dec 01 05:05:06 crc kubenswrapper[4880]: I1201 05:05:06.077858 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerStarted","Data":"8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc"} Dec 01 05:05:06 crc kubenswrapper[4880]: I1201 05:05:06.100370 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhr6w" podStartSLOduration=2.453004185 podStartE2EDuration="7.10011379s" podCreationTimestamp="2025-12-01 05:04:59 +0000 UTC" firstStartedPulling="2025-12-01 05:05:01.027191026 +0000 UTC m=+7730.538445398" lastFinishedPulling="2025-12-01 05:05:05.674300601 +0000 UTC m=+7735.185555003" observedRunningTime="2025-12-01 05:05:06.095100647 +0000 UTC m=+7735.606355019" watchObservedRunningTime="2025-12-01 05:05:06.10011379 +0000 UTC m=+7735.611368162" Dec 01 05:05:09 crc kubenswrapper[4880]: I1201 
05:05:09.602825 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:09 crc kubenswrapper[4880]: I1201 05:05:09.603329 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:10 crc kubenswrapper[4880]: I1201 05:05:10.650988 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zhr6w" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="registry-server" probeResult="failure" output=< Dec 01 05:05:10 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:05:10 crc kubenswrapper[4880]: > Dec 01 05:05:17 crc kubenswrapper[4880]: I1201 05:05:17.368626 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:05:17 crc kubenswrapper[4880]: I1201 05:05:17.370226 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:05:17 crc kubenswrapper[4880]: I1201 05:05:17.370414 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 05:05:17 crc kubenswrapper[4880]: I1201 05:05:17.371386 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3"} 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 05:05:17 crc kubenswrapper[4880]: I1201 05:05:17.371552 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" gracePeriod=600 Dec 01 05:05:17 crc kubenswrapper[4880]: E1201 05:05:17.511907 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:05:18 crc kubenswrapper[4880]: I1201 05:05:18.197044 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" exitCode=0 Dec 01 05:05:18 crc kubenswrapper[4880]: I1201 05:05:18.197157 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3"} Dec 01 05:05:18 crc kubenswrapper[4880]: I1201 05:05:18.197457 4880 scope.go:117] "RemoveContainer" containerID="8aa8403e519d191453d175aa7e1c7a6aabf8297c94d476bbb94920ff9c5a6461" Dec 01 05:05:18 crc kubenswrapper[4880]: I1201 05:05:18.198183 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 
01 05:05:18 crc kubenswrapper[4880]: E1201 05:05:18.198531 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:05:20 crc kubenswrapper[4880]: I1201 05:05:20.730887 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zhr6w" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="registry-server" probeResult="failure" output=< Dec 01 05:05:20 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:05:20 crc kubenswrapper[4880]: > Dec 01 05:05:24 crc kubenswrapper[4880]: I1201 05:05:24.926351 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bnhfj"] Dec 01 05:05:24 crc kubenswrapper[4880]: I1201 05:05:24.929189 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:24 crc kubenswrapper[4880]: I1201 05:05:24.949297 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-catalog-content\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:24 crc kubenswrapper[4880]: I1201 05:05:24.949390 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4x6s\" (UniqueName: \"kubernetes.io/projected/9ac8a541-5958-43ab-bf5c-845e08210f39-kube-api-access-p4x6s\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:24 crc kubenswrapper[4880]: I1201 05:05:24.949420 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-utilities\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:24 crc kubenswrapper[4880]: I1201 05:05:24.984932 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnhfj"] Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.049988 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-catalog-content\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.050269 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p4x6s\" (UniqueName: \"kubernetes.io/projected/9ac8a541-5958-43ab-bf5c-845e08210f39-kube-api-access-p4x6s\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.050300 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-utilities\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.050644 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-utilities\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.050814 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-catalog-content\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.070958 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4x6s\" (UniqueName: \"kubernetes.io/projected/9ac8a541-5958-43ab-bf5c-845e08210f39-kube-api-access-p4x6s\") pod \"community-operators-bnhfj\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.294826 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:25 crc kubenswrapper[4880]: I1201 05:05:25.825542 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bnhfj"] Dec 01 05:05:26 crc kubenswrapper[4880]: I1201 05:05:26.280709 4880 generic.go:334] "Generic (PLEG): container finished" podID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerID="f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6" exitCode=0 Dec 01 05:05:26 crc kubenswrapper[4880]: I1201 05:05:26.280753 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnhfj" event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerDied","Data":"f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6"} Dec 01 05:05:26 crc kubenswrapper[4880]: I1201 05:05:26.280975 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnhfj" event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerStarted","Data":"22e60b1932512a44ba8f46bd9b87a632af4092505ab163df9d3d3d8d6716e5cd"} Dec 01 05:05:27 crc kubenswrapper[4880]: I1201 05:05:27.292696 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnhfj" event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerStarted","Data":"0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1"} Dec 01 05:05:28 crc kubenswrapper[4880]: I1201 05:05:28.302805 4880 generic.go:334] "Generic (PLEG): container finished" podID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerID="0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1" exitCode=0 Dec 01 05:05:28 crc kubenswrapper[4880]: I1201 05:05:28.302861 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnhfj" 
event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerDied","Data":"0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1"} Dec 01 05:05:29 crc kubenswrapper[4880]: I1201 05:05:29.312407 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnhfj" event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerStarted","Data":"5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86"} Dec 01 05:05:29 crc kubenswrapper[4880]: I1201 05:05:29.344523 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bnhfj" podStartSLOduration=2.881663058 podStartE2EDuration="5.342993669s" podCreationTimestamp="2025-12-01 05:05:24 +0000 UTC" firstStartedPulling="2025-12-01 05:05:26.28213581 +0000 UTC m=+7755.793390182" lastFinishedPulling="2025-12-01 05:05:28.743466411 +0000 UTC m=+7758.254720793" observedRunningTime="2025-12-01 05:05:29.328998826 +0000 UTC m=+7758.840253208" watchObservedRunningTime="2025-12-01 05:05:29.342993669 +0000 UTC m=+7758.854248041" Dec 01 05:05:29 crc kubenswrapper[4880]: I1201 05:05:29.784192 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:29 crc kubenswrapper[4880]: I1201 05:05:29.841447 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:31 crc kubenswrapper[4880]: I1201 05:05:31.511187 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhr6w"] Dec 01 05:05:31 crc kubenswrapper[4880]: I1201 05:05:31.512011 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zhr6w" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="registry-server" 
containerID="cri-o://8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc" gracePeriod=2 Dec 01 05:05:31 crc kubenswrapper[4880]: I1201 05:05:31.784299 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:05:31 crc kubenswrapper[4880]: E1201 05:05:31.784508 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.092532 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.218610 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr87k\" (UniqueName: \"kubernetes.io/projected/349042bf-7104-4424-abc8-8a180f0a0f5f-kube-api-access-gr87k\") pod \"349042bf-7104-4424-abc8-8a180f0a0f5f\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.218688 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-catalog-content\") pod \"349042bf-7104-4424-abc8-8a180f0a0f5f\" (UID: \"349042bf-7104-4424-abc8-8a180f0a0f5f\") " Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.218740 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-utilities\") pod \"349042bf-7104-4424-abc8-8a180f0a0f5f\" (UID: 
\"349042bf-7104-4424-abc8-8a180f0a0f5f\") " Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.219832 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-utilities" (OuterVolumeSpecName: "utilities") pod "349042bf-7104-4424-abc8-8a180f0a0f5f" (UID: "349042bf-7104-4424-abc8-8a180f0a0f5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.229952 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349042bf-7104-4424-abc8-8a180f0a0f5f-kube-api-access-gr87k" (OuterVolumeSpecName: "kube-api-access-gr87k") pod "349042bf-7104-4424-abc8-8a180f0a0f5f" (UID: "349042bf-7104-4424-abc8-8a180f0a0f5f"). InnerVolumeSpecName "kube-api-access-gr87k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.320642 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr87k\" (UniqueName: \"kubernetes.io/projected/349042bf-7104-4424-abc8-8a180f0a0f5f-kube-api-access-gr87k\") on node \"crc\" DevicePath \"\"" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.320682 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.326338 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "349042bf-7104-4424-abc8-8a180f0a0f5f" (UID: "349042bf-7104-4424-abc8-8a180f0a0f5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.337767 4880 generic.go:334] "Generic (PLEG): container finished" podID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerID="8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc" exitCode=0 Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.337816 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerDied","Data":"8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc"} Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.337821 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhr6w" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.337857 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhr6w" event={"ID":"349042bf-7104-4424-abc8-8a180f0a0f5f","Type":"ContainerDied","Data":"ee5a63bd141b3ca34ae316081edc0d3d490349e5cf4472c8a68b8ffb1bfcf7aa"} Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.337916 4880 scope.go:117] "RemoveContainer" containerID="8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.357493 4880 scope.go:117] "RemoveContainer" containerID="aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.388655 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhr6w"] Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.402583 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhr6w"] Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.408184 4880 scope.go:117] "RemoveContainer" 
containerID="7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.424733 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349042bf-7104-4424-abc8-8a180f0a0f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.452374 4880 scope.go:117] "RemoveContainer" containerID="8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc" Dec 01 05:05:32 crc kubenswrapper[4880]: E1201 05:05:32.457594 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc\": container with ID starting with 8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc not found: ID does not exist" containerID="8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.457797 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc"} err="failed to get container status \"8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc\": rpc error: code = NotFound desc = could not find container \"8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc\": container with ID starting with 8caf77ef034d93390d56ac4802bd3bbd0eac215e5c3b7812833b646843ee41fc not found: ID does not exist" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.457833 4880 scope.go:117] "RemoveContainer" containerID="aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d" Dec 01 05:05:32 crc kubenswrapper[4880]: E1201 05:05:32.458455 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d\": container with ID starting with aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d not found: ID does not exist" containerID="aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.458508 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d"} err="failed to get container status \"aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d\": rpc error: code = NotFound desc = could not find container \"aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d\": container with ID starting with aa5101e3a7d703b744be8bc10f29848cb16a7300a7cfbc081d5d5fd212f46d2d not found: ID does not exist" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.458535 4880 scope.go:117] "RemoveContainer" containerID="7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b" Dec 01 05:05:32 crc kubenswrapper[4880]: E1201 05:05:32.458916 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b\": container with ID starting with 7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b not found: ID does not exist" containerID="7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.458943 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b"} err="failed to get container status \"7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b\": rpc error: code = NotFound desc = could not find container \"7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b\": container with ID 
starting with 7ee9e5cab089c639165c68f727749340dacb2317e052cdc1802c7839ded9dd8b not found: ID does not exist" Dec 01 05:05:32 crc kubenswrapper[4880]: I1201 05:05:32.799214 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" path="/var/lib/kubelet/pods/349042bf-7104-4424-abc8-8a180f0a0f5f/volumes" Dec 01 05:05:35 crc kubenswrapper[4880]: I1201 05:05:35.295542 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:35 crc kubenswrapper[4880]: I1201 05:05:35.297801 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:35 crc kubenswrapper[4880]: I1201 05:05:35.370049 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:35 crc kubenswrapper[4880]: I1201 05:05:35.458117 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:36 crc kubenswrapper[4880]: I1201 05:05:36.516648 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnhfj"] Dec 01 05:05:37 crc kubenswrapper[4880]: I1201 05:05:37.434717 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bnhfj" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="registry-server" containerID="cri-o://5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86" gracePeriod=2 Dec 01 05:05:37 crc kubenswrapper[4880]: I1201 05:05:37.948092 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.142703 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-catalog-content\") pod \"9ac8a541-5958-43ab-bf5c-845e08210f39\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.142925 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4x6s\" (UniqueName: \"kubernetes.io/projected/9ac8a541-5958-43ab-bf5c-845e08210f39-kube-api-access-p4x6s\") pod \"9ac8a541-5958-43ab-bf5c-845e08210f39\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.143098 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-utilities\") pod \"9ac8a541-5958-43ab-bf5c-845e08210f39\" (UID: \"9ac8a541-5958-43ab-bf5c-845e08210f39\") " Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.143665 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-utilities" (OuterVolumeSpecName: "utilities") pod "9ac8a541-5958-43ab-bf5c-845e08210f39" (UID: "9ac8a541-5958-43ab-bf5c-845e08210f39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.149460 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac8a541-5958-43ab-bf5c-845e08210f39-kube-api-access-p4x6s" (OuterVolumeSpecName: "kube-api-access-p4x6s") pod "9ac8a541-5958-43ab-bf5c-845e08210f39" (UID: "9ac8a541-5958-43ab-bf5c-845e08210f39"). InnerVolumeSpecName "kube-api-access-p4x6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.206817 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ac8a541-5958-43ab-bf5c-845e08210f39" (UID: "9ac8a541-5958-43ab-bf5c-845e08210f39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.245083 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.245119 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4x6s\" (UniqueName: \"kubernetes.io/projected/9ac8a541-5958-43ab-bf5c-845e08210f39-kube-api-access-p4x6s\") on node \"crc\" DevicePath \"\"" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.245129 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac8a541-5958-43ab-bf5c-845e08210f39-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.460500 4880 generic.go:334] "Generic (PLEG): container finished" podID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerID="5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86" exitCode=0 Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.460571 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bnhfj" event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerDied","Data":"5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86"} Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.460601 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bnhfj" event={"ID":"9ac8a541-5958-43ab-bf5c-845e08210f39","Type":"ContainerDied","Data":"22e60b1932512a44ba8f46bd9b87a632af4092505ab163df9d3d3d8d6716e5cd"} Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.460621 4880 scope.go:117] "RemoveContainer" containerID="5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.461034 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bnhfj" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.482399 4880 scope.go:117] "RemoveContainer" containerID="0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.516995 4880 scope.go:117] "RemoveContainer" containerID="f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.543700 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bnhfj"] Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.555592 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bnhfj"] Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.564831 4880 scope.go:117] "RemoveContainer" containerID="5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86" Dec 01 05:05:38 crc kubenswrapper[4880]: E1201 05:05:38.565295 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86\": container with ID starting with 5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86 not found: ID does not exist" containerID="5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 
05:05:38.565340 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86"} err="failed to get container status \"5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86\": rpc error: code = NotFound desc = could not find container \"5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86\": container with ID starting with 5ba637e8e54765bc3f515b96abc320c188481a5113538aa76e86f731416c3b86 not found: ID does not exist" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.565368 4880 scope.go:117] "RemoveContainer" containerID="0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1" Dec 01 05:05:38 crc kubenswrapper[4880]: E1201 05:05:38.565816 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1\": container with ID starting with 0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1 not found: ID does not exist" containerID="0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.565848 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1"} err="failed to get container status \"0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1\": rpc error: code = NotFound desc = could not find container \"0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1\": container with ID starting with 0538719d03940d116ba453301a7e164c911eb861f54b91a02529ab2d0f5848c1 not found: ID does not exist" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.565894 4880 scope.go:117] "RemoveContainer" containerID="f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6" Dec 01 05:05:38 crc 
kubenswrapper[4880]: E1201 05:05:38.566144 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6\": container with ID starting with f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6 not found: ID does not exist" containerID="f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.566173 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6"} err="failed to get container status \"f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6\": rpc error: code = NotFound desc = could not find container \"f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6\": container with ID starting with f2ae143b92a511c91dc9b7db7ff75cc32d652ac31e1db472828cc6903ccae1a6 not found: ID does not exist" Dec 01 05:05:38 crc kubenswrapper[4880]: I1201 05:05:38.799511 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" path="/var/lib/kubelet/pods/9ac8a541-5958-43ab-bf5c-845e08210f39/volumes" Dec 01 05:05:44 crc kubenswrapper[4880]: I1201 05:05:44.784835 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:05:44 crc kubenswrapper[4880]: E1201 05:05:44.785624 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:05:59 crc 
kubenswrapper[4880]: I1201 05:05:59.783964 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:05:59 crc kubenswrapper[4880]: E1201 05:05:59.784843 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:06:12 crc kubenswrapper[4880]: I1201 05:06:12.784183 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:06:12 crc kubenswrapper[4880]: E1201 05:06:12.785522 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:06:24 crc kubenswrapper[4880]: I1201 05:06:24.784344 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:06:24 crc kubenswrapper[4880]: E1201 05:06:24.785203 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 
01 05:06:38 crc kubenswrapper[4880]: I1201 05:06:38.784706 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:06:38 crc kubenswrapper[4880]: E1201 05:06:38.785821 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:06:51 crc kubenswrapper[4880]: I1201 05:06:51.785385 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:06:51 crc kubenswrapper[4880]: E1201 05:06:51.786293 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:07:04 crc kubenswrapper[4880]: I1201 05:07:04.784437 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:07:04 crc kubenswrapper[4880]: E1201 05:07:04.785241 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:07:17 crc kubenswrapper[4880]: I1201 05:07:17.785230 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:07:17 crc kubenswrapper[4880]: E1201 05:07:17.786063 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.508386 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzc5k"] Dec 01 05:07:29 crc kubenswrapper[4880]: E1201 05:07:29.509398 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="registry-server" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.509415 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="registry-server" Dec 01 05:07:29 crc kubenswrapper[4880]: E1201 05:07:29.509430 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="extract-utilities" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.509439 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="extract-utilities" Dec 01 05:07:29 crc kubenswrapper[4880]: E1201 05:07:29.509462 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="extract-content" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.509473 4880 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="extract-content" Dec 01 05:07:29 crc kubenswrapper[4880]: E1201 05:07:29.509502 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="extract-content" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.509510 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="extract-content" Dec 01 05:07:29 crc kubenswrapper[4880]: E1201 05:07:29.509524 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="registry-server" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.509532 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="registry-server" Dec 01 05:07:29 crc kubenswrapper[4880]: E1201 05:07:29.509554 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="extract-utilities" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.509562 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="extract-utilities" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.510748 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac8a541-5958-43ab-bf5c-845e08210f39" containerName="registry-server" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.510775 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="349042bf-7104-4424-abc8-8a180f0a0f5f" containerName="registry-server" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.512641 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.536801 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzc5k"] Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.617815 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-catalog-content\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.617934 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxm5p\" (UniqueName: \"kubernetes.io/projected/ca1d2d64-a460-4778-9b64-4c68216328e5-kube-api-access-sxm5p\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.618001 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-utilities\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.719438 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxm5p\" (UniqueName: \"kubernetes.io/projected/ca1d2d64-a460-4778-9b64-4c68216328e5-kube-api-access-sxm5p\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.720264 4880 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-utilities\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.720411 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-catalog-content\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.720617 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-utilities\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.720904 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-catalog-content\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.740372 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxm5p\" (UniqueName: \"kubernetes.io/projected/ca1d2d64-a460-4778-9b64-4c68216328e5-kube-api-access-sxm5p\") pod \"certified-operators-gzc5k\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:29 crc kubenswrapper[4880]: I1201 05:07:29.844997 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:30 crc kubenswrapper[4880]: I1201 05:07:30.294764 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzc5k"] Dec 01 05:07:30 crc kubenswrapper[4880]: I1201 05:07:30.748609 4880 generic.go:334] "Generic (PLEG): container finished" podID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerID="e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e" exitCode=0 Dec 01 05:07:30 crc kubenswrapper[4880]: I1201 05:07:30.748865 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzc5k" event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerDied","Data":"e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e"} Dec 01 05:07:30 crc kubenswrapper[4880]: I1201 05:07:30.748932 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzc5k" event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerStarted","Data":"55a6d10aa0b90679e642ca7a546d2cdaf4cd9d782ab34bfc7a881979602c9100"} Dec 01 05:07:31 crc kubenswrapper[4880]: I1201 05:07:31.764121 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzc5k" event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerStarted","Data":"28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124"} Dec 01 05:07:32 crc kubenswrapper[4880]: I1201 05:07:32.776361 4880 generic.go:334] "Generic (PLEG): container finished" podID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerID="28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124" exitCode=0 Dec 01 05:07:32 crc kubenswrapper[4880]: I1201 05:07:32.776457 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzc5k" 
event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerDied","Data":"28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124"} Dec 01 05:07:32 crc kubenswrapper[4880]: I1201 05:07:32.784491 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:07:32 crc kubenswrapper[4880]: E1201 05:07:32.784749 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:07:33 crc kubenswrapper[4880]: I1201 05:07:33.788907 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzc5k" event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerStarted","Data":"65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9"} Dec 01 05:07:33 crc kubenswrapper[4880]: I1201 05:07:33.810312 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzc5k" podStartSLOduration=2.218853618 podStartE2EDuration="4.810294268s" podCreationTimestamp="2025-12-01 05:07:29 +0000 UTC" firstStartedPulling="2025-12-01 05:07:30.750698801 +0000 UTC m=+7880.261953173" lastFinishedPulling="2025-12-01 05:07:33.342139451 +0000 UTC m=+7882.853393823" observedRunningTime="2025-12-01 05:07:33.805395408 +0000 UTC m=+7883.316649790" watchObservedRunningTime="2025-12-01 05:07:33.810294268 +0000 UTC m=+7883.321548660" Dec 01 05:07:39 crc kubenswrapper[4880]: I1201 05:07:39.845405 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:39 crc 
kubenswrapper[4880]: I1201 05:07:39.845894 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:39 crc kubenswrapper[4880]: I1201 05:07:39.912950 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:40 crc kubenswrapper[4880]: I1201 05:07:40.951209 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:41 crc kubenswrapper[4880]: I1201 05:07:41.023942 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzc5k"] Dec 01 05:07:42 crc kubenswrapper[4880]: I1201 05:07:42.881304 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzc5k" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="registry-server" containerID="cri-o://65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9" gracePeriod=2 Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.453838 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.547911 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-catalog-content\") pod \"ca1d2d64-a460-4778-9b64-4c68216328e5\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.547982 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxm5p\" (UniqueName: \"kubernetes.io/projected/ca1d2d64-a460-4778-9b64-4c68216328e5-kube-api-access-sxm5p\") pod \"ca1d2d64-a460-4778-9b64-4c68216328e5\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.548071 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-utilities\") pod \"ca1d2d64-a460-4778-9b64-4c68216328e5\" (UID: \"ca1d2d64-a460-4778-9b64-4c68216328e5\") " Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.549053 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-utilities" (OuterVolumeSpecName: "utilities") pod "ca1d2d64-a460-4778-9b64-4c68216328e5" (UID: "ca1d2d64-a460-4778-9b64-4c68216328e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.557227 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca1d2d64-a460-4778-9b64-4c68216328e5-kube-api-access-sxm5p" (OuterVolumeSpecName: "kube-api-access-sxm5p") pod "ca1d2d64-a460-4778-9b64-4c68216328e5" (UID: "ca1d2d64-a460-4778-9b64-4c68216328e5"). InnerVolumeSpecName "kube-api-access-sxm5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.599752 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca1d2d64-a460-4778-9b64-4c68216328e5" (UID: "ca1d2d64-a460-4778-9b64-4c68216328e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.650164 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.650222 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca1d2d64-a460-4778-9b64-4c68216328e5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.650234 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxm5p\" (UniqueName: \"kubernetes.io/projected/ca1d2d64-a460-4778-9b64-4c68216328e5-kube-api-access-sxm5p\") on node \"crc\" DevicePath \"\"" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.892825 4880 generic.go:334] "Generic (PLEG): container finished" podID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerID="65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9" exitCode=0 Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.892910 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzc5k" event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerDied","Data":"65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9"} Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.893011 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-gzc5k" event={"ID":"ca1d2d64-a460-4778-9b64-4c68216328e5","Type":"ContainerDied","Data":"55a6d10aa0b90679e642ca7a546d2cdaf4cd9d782ab34bfc7a881979602c9100"} Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.892935 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzc5k" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.893057 4880 scope.go:117] "RemoveContainer" containerID="65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.943304 4880 scope.go:117] "RemoveContainer" containerID="28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124" Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.960039 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzc5k"] Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.975479 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzc5k"] Dec 01 05:07:43 crc kubenswrapper[4880]: I1201 05:07:43.989552 4880 scope.go:117] "RemoveContainer" containerID="e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 05:07:44.024016 4880 scope.go:117] "RemoveContainer" containerID="65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9" Dec 01 05:07:44 crc kubenswrapper[4880]: E1201 05:07:44.024763 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9\": container with ID starting with 65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9 not found: ID does not exist" containerID="65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 
05:07:44.024824 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9"} err="failed to get container status \"65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9\": rpc error: code = NotFound desc = could not find container \"65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9\": container with ID starting with 65f0533ad183172c6ace38fd200ee740356427d62e0ae8a9d9bfbba1ca9ea5c9 not found: ID does not exist" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 05:07:44.024855 4880 scope.go:117] "RemoveContainer" containerID="28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124" Dec 01 05:07:44 crc kubenswrapper[4880]: E1201 05:07:44.025560 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124\": container with ID starting with 28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124 not found: ID does not exist" containerID="28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 05:07:44.025610 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124"} err="failed to get container status \"28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124\": rpc error: code = NotFound desc = could not find container \"28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124\": container with ID starting with 28ac166abdebf9b969d513ecaef5acdaeeb60b1883e15bce9aa863f17cb6c124 not found: ID does not exist" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 05:07:44.025645 4880 scope.go:117] "RemoveContainer" containerID="e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e" Dec 01 05:07:44 crc 
kubenswrapper[4880]: E1201 05:07:44.026107 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e\": container with ID starting with e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e not found: ID does not exist" containerID="e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 05:07:44.026136 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e"} err="failed to get container status \"e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e\": rpc error: code = NotFound desc = could not find container \"e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e\": container with ID starting with e6c89ede0f8419ccdcfcfe4ad35007095207e6c0dd6f792300b5ec74685a9c1e not found: ID does not exist" Dec 01 05:07:44 crc kubenswrapper[4880]: I1201 05:07:44.797757 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" path="/var/lib/kubelet/pods/ca1d2d64-a460-4778-9b64-4c68216328e5/volumes" Dec 01 05:07:47 crc kubenswrapper[4880]: I1201 05:07:47.785018 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:07:47 crc kubenswrapper[4880]: E1201 05:07:47.785926 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:07:58 crc 
kubenswrapper[4880]: I1201 05:07:58.784451 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:07:58 crc kubenswrapper[4880]: E1201 05:07:58.785953 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:08:13 crc kubenswrapper[4880]: I1201 05:08:13.784148 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:08:13 crc kubenswrapper[4880]: E1201 05:08:13.785206 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:08:27 crc kubenswrapper[4880]: I1201 05:08:27.785039 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:08:27 crc kubenswrapper[4880]: E1201 05:08:27.786096 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 
01 05:08:40 crc kubenswrapper[4880]: I1201 05:08:40.798235 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:08:40 crc kubenswrapper[4880]: E1201 05:08:40.799358 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:08:51 crc kubenswrapper[4880]: I1201 05:08:51.784562 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:08:51 crc kubenswrapper[4880]: E1201 05:08:51.785255 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:09:02 crc kubenswrapper[4880]: I1201 05:09:02.784530 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:09:02 crc kubenswrapper[4880]: E1201 05:09:02.785405 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:09:14 crc kubenswrapper[4880]: I1201 05:09:14.787951 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:09:14 crc kubenswrapper[4880]: E1201 05:09:14.789243 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:09:26 crc kubenswrapper[4880]: I1201 05:09:26.785014 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:09:26 crc kubenswrapper[4880]: E1201 05:09:26.785581 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:09:41 crc kubenswrapper[4880]: I1201 05:09:41.784217 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:09:41 crc kubenswrapper[4880]: E1201 05:09:41.785277 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.119010 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r4r62"] Dec 01 05:09:43 crc kubenswrapper[4880]: E1201 05:09:43.121318 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="extract-utilities" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.121359 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="extract-utilities" Dec 01 05:09:43 crc kubenswrapper[4880]: E1201 05:09:43.121415 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="registry-server" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.121429 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="registry-server" Dec 01 05:09:43 crc kubenswrapper[4880]: E1201 05:09:43.121471 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="extract-content" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.121483 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="extract-content" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.121841 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1d2d64-a460-4778-9b64-4c68216328e5" containerName="registry-server" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.124789 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.137937 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4r62"] Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.163626 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-catalog-content\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.163702 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjfv\" (UniqueName: \"kubernetes.io/projected/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-kube-api-access-4kjfv\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.163821 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-utilities\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.265657 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kjfv\" (UniqueName: \"kubernetes.io/projected/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-kube-api-access-4kjfv\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.265814 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-utilities\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.265932 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-catalog-content\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.266456 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-catalog-content\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.266649 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-utilities\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.300035 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kjfv\" (UniqueName: \"kubernetes.io/projected/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-kube-api-access-4kjfv\") pod \"redhat-marketplace-r4r62\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.443312 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:43 crc kubenswrapper[4880]: I1201 05:09:43.990377 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4r62"] Dec 01 05:09:44 crc kubenswrapper[4880]: I1201 05:09:44.551887 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4r62" event={"ID":"e635579c-8d5c-43bf-b1d6-6b3ef2d41999","Type":"ContainerStarted","Data":"c5828dbfa0ce1e09320bcb2ddafb37f45712f55ebbe2f931d63a1009f68d409b"} Dec 01 05:09:45 crc kubenswrapper[4880]: I1201 05:09:45.561430 4880 generic.go:334] "Generic (PLEG): container finished" podID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerID="9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736" exitCode=0 Dec 01 05:09:45 crc kubenswrapper[4880]: I1201 05:09:45.561669 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4r62" event={"ID":"e635579c-8d5c-43bf-b1d6-6b3ef2d41999","Type":"ContainerDied","Data":"9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736"} Dec 01 05:09:47 crc kubenswrapper[4880]: I1201 05:09:47.583249 4880 generic.go:334] "Generic (PLEG): container finished" podID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerID="cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce" exitCode=0 Dec 01 05:09:47 crc kubenswrapper[4880]: I1201 05:09:47.583839 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4r62" event={"ID":"e635579c-8d5c-43bf-b1d6-6b3ef2d41999","Type":"ContainerDied","Data":"cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce"} Dec 01 05:09:48 crc kubenswrapper[4880]: I1201 05:09:48.610132 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4r62" 
event={"ID":"e635579c-8d5c-43bf-b1d6-6b3ef2d41999","Type":"ContainerStarted","Data":"01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58"} Dec 01 05:09:48 crc kubenswrapper[4880]: I1201 05:09:48.648334 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r4r62" podStartSLOduration=2.886208605 podStartE2EDuration="5.648309429s" podCreationTimestamp="2025-12-01 05:09:43 +0000 UTC" firstStartedPulling="2025-12-01 05:09:45.563254088 +0000 UTC m=+8015.074508460" lastFinishedPulling="2025-12-01 05:09:48.325354872 +0000 UTC m=+8017.836609284" observedRunningTime="2025-12-01 05:09:48.634043969 +0000 UTC m=+8018.145298341" watchObservedRunningTime="2025-12-01 05:09:48.648309429 +0000 UTC m=+8018.159563801" Dec 01 05:09:53 crc kubenswrapper[4880]: I1201 05:09:53.444325 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:53 crc kubenswrapper[4880]: I1201 05:09:53.445123 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:53 crc kubenswrapper[4880]: I1201 05:09:53.518151 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:53 crc kubenswrapper[4880]: I1201 05:09:53.719924 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:53 crc kubenswrapper[4880]: I1201 05:09:53.784595 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:09:53 crc kubenswrapper[4880]: E1201 05:09:53.784909 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:09:53 crc kubenswrapper[4880]: I1201 05:09:53.791403 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4r62"] Dec 01 05:09:55 crc kubenswrapper[4880]: I1201 05:09:55.685863 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r4r62" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="registry-server" containerID="cri-o://01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58" gracePeriod=2 Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.193244 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.323849 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-utilities\") pod \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.323909 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-catalog-content\") pod \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\" (UID: \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.323972 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kjfv\" (UniqueName: \"kubernetes.io/projected/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-kube-api-access-4kjfv\") pod \"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\" (UID: 
\"e635579c-8d5c-43bf-b1d6-6b3ef2d41999\") " Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.325070 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-utilities" (OuterVolumeSpecName: "utilities") pod "e635579c-8d5c-43bf-b1d6-6b3ef2d41999" (UID: "e635579c-8d5c-43bf-b1d6-6b3ef2d41999"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.330195 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-kube-api-access-4kjfv" (OuterVolumeSpecName: "kube-api-access-4kjfv") pod "e635579c-8d5c-43bf-b1d6-6b3ef2d41999" (UID: "e635579c-8d5c-43bf-b1d6-6b3ef2d41999"). InnerVolumeSpecName "kube-api-access-4kjfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.338836 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e635579c-8d5c-43bf-b1d6-6b3ef2d41999" (UID: "e635579c-8d5c-43bf-b1d6-6b3ef2d41999"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.426187 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.426212 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.426223 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kjfv\" (UniqueName: \"kubernetes.io/projected/e635579c-8d5c-43bf-b1d6-6b3ef2d41999-kube-api-access-4kjfv\") on node \"crc\" DevicePath \"\"" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.702175 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r4r62" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.702214 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4r62" event={"ID":"e635579c-8d5c-43bf-b1d6-6b3ef2d41999","Type":"ContainerDied","Data":"01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58"} Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.702294 4880 scope.go:117] "RemoveContainer" containerID="01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.711947 4880 generic.go:334] "Generic (PLEG): container finished" podID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerID="01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58" exitCode=0 Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.712029 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r4r62" 
event={"ID":"e635579c-8d5c-43bf-b1d6-6b3ef2d41999","Type":"ContainerDied","Data":"c5828dbfa0ce1e09320bcb2ddafb37f45712f55ebbe2f931d63a1009f68d409b"} Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.758459 4880 scope.go:117] "RemoveContainer" containerID="cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.776050 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4r62"] Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.813206 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r4r62"] Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.818693 4880 scope.go:117] "RemoveContainer" containerID="9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.846363 4880 scope.go:117] "RemoveContainer" containerID="01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58" Dec 01 05:09:56 crc kubenswrapper[4880]: E1201 05:09:56.846942 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58\": container with ID starting with 01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58 not found: ID does not exist" containerID="01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.846978 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58"} err="failed to get container status \"01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58\": rpc error: code = NotFound desc = could not find container \"01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58\": container with ID starting with 
01e2ff06804684da02e396e1df0190da08cbc937fc8509414e83baec4a0dfb58 not found: ID does not exist" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.847010 4880 scope.go:117] "RemoveContainer" containerID="cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce" Dec 01 05:09:56 crc kubenswrapper[4880]: E1201 05:09:56.847246 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce\": container with ID starting with cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce not found: ID does not exist" containerID="cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.847276 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce"} err="failed to get container status \"cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce\": rpc error: code = NotFound desc = could not find container \"cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce\": container with ID starting with cdb80cf97bf7a80ec1d25dae6b6b0ae2224d24c70f0d90ac9b73904d293aadce not found: ID does not exist" Dec 01 05:09:56 crc kubenswrapper[4880]: I1201 05:09:56.847295 4880 scope.go:117] "RemoveContainer" containerID="9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736" Dec 01 05:09:56 crc kubenswrapper[4880]: E1201 05:09:56.847563 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736\": container with ID starting with 9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736 not found: ID does not exist" containerID="9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736" Dec 01 05:09:56 crc 
kubenswrapper[4880]: I1201 05:09:56.847595 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736"} err="failed to get container status \"9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736\": rpc error: code = NotFound desc = could not find container \"9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736\": container with ID starting with 9de213a725e4735507e51256a386f031d228b24e96c2fbdbce15a9e8f0e46736 not found: ID does not exist" Dec 01 05:09:58 crc kubenswrapper[4880]: I1201 05:09:58.801992 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" path="/var/lib/kubelet/pods/e635579c-8d5c-43bf-b1d6-6b3ef2d41999/volumes" Dec 01 05:10:05 crc kubenswrapper[4880]: I1201 05:10:05.784886 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:10:05 crc kubenswrapper[4880]: E1201 05:10:05.785662 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:10:20 crc kubenswrapper[4880]: I1201 05:10:20.793242 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:10:21 crc kubenswrapper[4880]: I1201 05:10:21.981645 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"5abede3936b65ebeb52de9d2d80ab376f80189c39953c59e000ef239d395e90b"} Dec 01 05:12:47 crc kubenswrapper[4880]: I1201 05:12:47.369196 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:12:47 crc kubenswrapper[4880]: I1201 05:12:47.369802 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:13:17 crc kubenswrapper[4880]: I1201 05:13:17.369127 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:13:17 crc kubenswrapper[4880]: I1201 05:13:17.369649 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:13:47 crc kubenswrapper[4880]: I1201 05:13:47.369158 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Dec 01 05:13:47 crc kubenswrapper[4880]: I1201 05:13:47.369532 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:13:47 crc kubenswrapper[4880]: I1201 05:13:47.369578 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 05:13:47 crc kubenswrapper[4880]: I1201 05:13:47.370379 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5abede3936b65ebeb52de9d2d80ab376f80189c39953c59e000ef239d395e90b"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 05:13:47 crc kubenswrapper[4880]: I1201 05:13:47.370430 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://5abede3936b65ebeb52de9d2d80ab376f80189c39953c59e000ef239d395e90b" gracePeriod=600 Dec 01 05:13:48 crc kubenswrapper[4880]: I1201 05:13:48.094002 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="5abede3936b65ebeb52de9d2d80ab376f80189c39953c59e000ef239d395e90b" exitCode=0 Dec 01 05:13:48 crc kubenswrapper[4880]: I1201 05:13:48.094104 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"5abede3936b65ebeb52de9d2d80ab376f80189c39953c59e000ef239d395e90b"} Dec 01 05:13:48 crc kubenswrapper[4880]: I1201 05:13:48.094443 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2"} Dec 01 05:13:48 crc kubenswrapper[4880]: I1201 05:13:48.094468 4880 scope.go:117] "RemoveContainer" containerID="2c028afdf7b21b063e1eef22988e3138619a88e5e116ff7a4ecdfb21e306e7b3" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.233825 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m"] Dec 01 05:15:00 crc kubenswrapper[4880]: E1201 05:15:00.234994 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="extract-utilities" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.235014 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="extract-utilities" Dec 01 05:15:00 crc kubenswrapper[4880]: E1201 05:15:00.235047 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="extract-content" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.235056 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="extract-content" Dec 01 05:15:00 crc kubenswrapper[4880]: E1201 05:15:00.235084 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="registry-server" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.235093 4880 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="registry-server" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.235332 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="e635579c-8d5c-43bf-b1d6-6b3ef2d41999" containerName="registry-server" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.236238 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.248742 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.249002 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.270849 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m"] Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.278593 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3633518c-557a-48f8-ab98-c40a2dc52b4c-secret-volume\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.278656 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdrk\" (UniqueName: \"kubernetes.io/projected/3633518c-557a-48f8-ab98-c40a2dc52b4c-kube-api-access-vwdrk\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 
crc kubenswrapper[4880]: I1201 05:15:00.278794 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3633518c-557a-48f8-ab98-c40a2dc52b4c-config-volume\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.380096 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3633518c-557a-48f8-ab98-c40a2dc52b4c-secret-volume\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.380195 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdrk\" (UniqueName: \"kubernetes.io/projected/3633518c-557a-48f8-ab98-c40a2dc52b4c-kube-api-access-vwdrk\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.380469 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3633518c-557a-48f8-ab98-c40a2dc52b4c-config-volume\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.381759 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3633518c-557a-48f8-ab98-c40a2dc52b4c-config-volume\") pod \"collect-profiles-29409435-qkq8m\" (UID: 
\"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.389595 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3633518c-557a-48f8-ab98-c40a2dc52b4c-secret-volume\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.398114 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdrk\" (UniqueName: \"kubernetes.io/projected/3633518c-557a-48f8-ab98-c40a2dc52b4c-kube-api-access-vwdrk\") pod \"collect-profiles-29409435-qkq8m\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:00 crc kubenswrapper[4880]: I1201 05:15:00.559142 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:01 crc kubenswrapper[4880]: I1201 05:15:01.342451 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m"] Dec 01 05:15:01 crc kubenswrapper[4880]: W1201 05:15:01.360680 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3633518c_557a_48f8_ab98_c40a2dc52b4c.slice/crio-efeeafca58c1d356423f29f75afd9e7579313fc80aba18799c2245b25d66fcb2 WatchSource:0}: Error finding container efeeafca58c1d356423f29f75afd9e7579313fc80aba18799c2245b25d66fcb2: Status 404 returned error can't find the container with id efeeafca58c1d356423f29f75afd9e7579313fc80aba18799c2245b25d66fcb2 Dec 01 05:15:01 crc kubenswrapper[4880]: I1201 05:15:01.844938 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" event={"ID":"3633518c-557a-48f8-ab98-c40a2dc52b4c","Type":"ContainerStarted","Data":"91bf5ebeabe02f1b4c8993631223bcc0bf87f5316da4d2635e2fb23a4ee8f9df"} Dec 01 05:15:01 crc kubenswrapper[4880]: I1201 05:15:01.845207 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" event={"ID":"3633518c-557a-48f8-ab98-c40a2dc52b4c","Type":"ContainerStarted","Data":"efeeafca58c1d356423f29f75afd9e7579313fc80aba18799c2245b25d66fcb2"} Dec 01 05:15:01 crc kubenswrapper[4880]: I1201 05:15:01.860723 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" podStartSLOduration=1.860489395 podStartE2EDuration="1.860489395s" podCreationTimestamp="2025-12-01 05:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 
05:15:01.858396374 +0000 UTC m=+8331.369650746" watchObservedRunningTime="2025-12-01 05:15:01.860489395 +0000 UTC m=+8331.371743767" Dec 01 05:15:02 crc kubenswrapper[4880]: I1201 05:15:02.859707 4880 generic.go:334] "Generic (PLEG): container finished" podID="3633518c-557a-48f8-ab98-c40a2dc52b4c" containerID="91bf5ebeabe02f1b4c8993631223bcc0bf87f5316da4d2635e2fb23a4ee8f9df" exitCode=0 Dec 01 05:15:02 crc kubenswrapper[4880]: I1201 05:15:02.859773 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" event={"ID":"3633518c-557a-48f8-ab98-c40a2dc52b4c","Type":"ContainerDied","Data":"91bf5ebeabe02f1b4c8993631223bcc0bf87f5316da4d2635e2fb23a4ee8f9df"} Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.353920 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.462900 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3633518c-557a-48f8-ab98-c40a2dc52b4c-secret-volume\") pod \"3633518c-557a-48f8-ab98-c40a2dc52b4c\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.463124 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3633518c-557a-48f8-ab98-c40a2dc52b4c-config-volume\") pod \"3633518c-557a-48f8-ab98-c40a2dc52b4c\" (UID: \"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.463191 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwdrk\" (UniqueName: \"kubernetes.io/projected/3633518c-557a-48f8-ab98-c40a2dc52b4c-kube-api-access-vwdrk\") pod \"3633518c-557a-48f8-ab98-c40a2dc52b4c\" (UID: 
\"3633518c-557a-48f8-ab98-c40a2dc52b4c\") " Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.464108 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3633518c-557a-48f8-ab98-c40a2dc52b4c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3633518c-557a-48f8-ab98-c40a2dc52b4c" (UID: "3633518c-557a-48f8-ab98-c40a2dc52b4c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.472201 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3633518c-557a-48f8-ab98-c40a2dc52b4c-kube-api-access-vwdrk" (OuterVolumeSpecName: "kube-api-access-vwdrk") pod "3633518c-557a-48f8-ab98-c40a2dc52b4c" (UID: "3633518c-557a-48f8-ab98-c40a2dc52b4c"). InnerVolumeSpecName "kube-api-access-vwdrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.472248 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3633518c-557a-48f8-ab98-c40a2dc52b4c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3633518c-557a-48f8-ab98-c40a2dc52b4c" (UID: "3633518c-557a-48f8-ab98-c40a2dc52b4c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.566125 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3633518c-557a-48f8-ab98-c40a2dc52b4c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.566196 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdrk\" (UniqueName: \"kubernetes.io/projected/3633518c-557a-48f8-ab98-c40a2dc52b4c-kube-api-access-vwdrk\") on node \"crc\" DevicePath \"\"" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.566227 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3633518c-557a-48f8-ab98-c40a2dc52b4c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.888145 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" event={"ID":"3633518c-557a-48f8-ab98-c40a2dc52b4c","Type":"ContainerDied","Data":"efeeafca58c1d356423f29f75afd9e7579313fc80aba18799c2245b25d66fcb2"} Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.888206 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efeeafca58c1d356423f29f75afd9e7579313fc80aba18799c2245b25d66fcb2" Dec 01 05:15:04 crc kubenswrapper[4880]: I1201 05:15:04.888225 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409435-qkq8m" Dec 01 05:15:05 crc kubenswrapper[4880]: I1201 05:15:05.440233 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w"] Dec 01 05:15:05 crc kubenswrapper[4880]: I1201 05:15:05.453980 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409390-4694w"] Dec 01 05:15:06 crc kubenswrapper[4880]: I1201 05:15:06.802651 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15940db-2356-4f35-80cb-7ac782831281" path="/var/lib/kubelet/pods/d15940db-2356-4f35-80cb-7ac782831281/volumes" Dec 01 05:15:45 crc kubenswrapper[4880]: I1201 05:15:45.697831 4880 scope.go:117] "RemoveContainer" containerID="e23c5b1249c3b6b57dcb4f0a39b1f63e910b95e853cf5fa7531fb2f519e38790" Dec 01 05:15:47 crc kubenswrapper[4880]: I1201 05:15:47.368571 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:15:47 crc kubenswrapper[4880]: I1201 05:15:47.368936 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.184799 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mgr5j"] Dec 01 05:15:59 crc kubenswrapper[4880]: E1201 05:15:59.186203 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3633518c-557a-48f8-ab98-c40a2dc52b4c" containerName="collect-profiles" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.186224 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="3633518c-557a-48f8-ab98-c40a2dc52b4c" containerName="collect-profiles" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.186490 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="3633518c-557a-48f8-ab98-c40a2dc52b4c" containerName="collect-profiles" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.188776 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.194369 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mgr5j"] Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.263644 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-utilities\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.263687 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dn6l\" (UniqueName: \"kubernetes.io/projected/d02d1f42-aef5-489f-a827-c7511e25c02e-kube-api-access-8dn6l\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.263751 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-catalog-content\") pod \"community-operators-mgr5j\" (UID: 
\"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.365439 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-catalog-content\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.366207 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-utilities\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.366331 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dn6l\" (UniqueName: \"kubernetes.io/projected/d02d1f42-aef5-489f-a827-c7511e25c02e-kube-api-access-8dn6l\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.368644 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-catalog-content\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.368774 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-utilities\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") 
" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.386894 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dn6l\" (UniqueName: \"kubernetes.io/projected/d02d1f42-aef5-489f-a827-c7511e25c02e-kube-api-access-8dn6l\") pod \"community-operators-mgr5j\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:15:59 crc kubenswrapper[4880]: I1201 05:15:59.550895 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:00 crc kubenswrapper[4880]: I1201 05:16:00.187504 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mgr5j"] Dec 01 05:16:00 crc kubenswrapper[4880]: I1201 05:16:00.482744 4880 generic.go:334] "Generic (PLEG): container finished" podID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerID="aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff" exitCode=0 Dec 01 05:16:00 crc kubenswrapper[4880]: I1201 05:16:00.482837 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerDied","Data":"aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff"} Dec 01 05:16:00 crc kubenswrapper[4880]: I1201 05:16:00.483076 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerStarted","Data":"e5630a3dd895347ca6d24e98bc697205d0ac529068791339523627a2f46b30c9"} Dec 01 05:16:00 crc kubenswrapper[4880]: I1201 05:16:00.485342 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 05:16:02 crc kubenswrapper[4880]: I1201 05:16:02.511907 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerStarted","Data":"71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554"} Dec 01 05:16:03 crc kubenswrapper[4880]: I1201 05:16:03.524705 4880 generic.go:334] "Generic (PLEG): container finished" podID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerID="71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554" exitCode=0 Dec 01 05:16:03 crc kubenswrapper[4880]: I1201 05:16:03.524762 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerDied","Data":"71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554"} Dec 01 05:16:04 crc kubenswrapper[4880]: I1201 05:16:04.537250 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerStarted","Data":"9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8"} Dec 01 05:16:04 crc kubenswrapper[4880]: I1201 05:16:04.561751 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mgr5j" podStartSLOduration=2.082915141 podStartE2EDuration="5.561725076s" podCreationTimestamp="2025-12-01 05:15:59 +0000 UTC" firstStartedPulling="2025-12-01 05:16:00.484683535 +0000 UTC m=+8389.995937907" lastFinishedPulling="2025-12-01 05:16:03.96349345 +0000 UTC m=+8393.474747842" observedRunningTime="2025-12-01 05:16:04.557136523 +0000 UTC m=+8394.068390915" watchObservedRunningTime="2025-12-01 05:16:04.561725076 +0000 UTC m=+8394.072979468" Dec 01 05:16:09 crc kubenswrapper[4880]: I1201 05:16:09.551769 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:09 crc kubenswrapper[4880]: I1201 05:16:09.552356 
4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:09 crc kubenswrapper[4880]: I1201 05:16:09.599101 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:09 crc kubenswrapper[4880]: I1201 05:16:09.676416 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:09 crc kubenswrapper[4880]: I1201 05:16:09.849293 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mgr5j"] Dec 01 05:16:11 crc kubenswrapper[4880]: I1201 05:16:11.604736 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mgr5j" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="registry-server" containerID="cri-o://9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8" gracePeriod=2 Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.138585 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.142462 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dn6l\" (UniqueName: \"kubernetes.io/projected/d02d1f42-aef5-489f-a827-c7511e25c02e-kube-api-access-8dn6l\") pod \"d02d1f42-aef5-489f-a827-c7511e25c02e\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.147255 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02d1f42-aef5-489f-a827-c7511e25c02e-kube-api-access-8dn6l" (OuterVolumeSpecName: "kube-api-access-8dn6l") pod "d02d1f42-aef5-489f-a827-c7511e25c02e" (UID: "d02d1f42-aef5-489f-a827-c7511e25c02e"). 
InnerVolumeSpecName "kube-api-access-8dn6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.243853 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-utilities\") pod \"d02d1f42-aef5-489f-a827-c7511e25c02e\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.243928 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-catalog-content\") pod \"d02d1f42-aef5-489f-a827-c7511e25c02e\" (UID: \"d02d1f42-aef5-489f-a827-c7511e25c02e\") " Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.244486 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dn6l\" (UniqueName: \"kubernetes.io/projected/d02d1f42-aef5-489f-a827-c7511e25c02e-kube-api-access-8dn6l\") on node \"crc\" DevicePath \"\"" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.245937 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-utilities" (OuterVolumeSpecName: "utilities") pod "d02d1f42-aef5-489f-a827-c7511e25c02e" (UID: "d02d1f42-aef5-489f-a827-c7511e25c02e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.304046 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d02d1f42-aef5-489f-a827-c7511e25c02e" (UID: "d02d1f42-aef5-489f-a827-c7511e25c02e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.345792 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.345823 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d02d1f42-aef5-489f-a827-c7511e25c02e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.613816 4880 generic.go:334] "Generic (PLEG): container finished" podID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerID="9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8" exitCode=0 Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.613858 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerDied","Data":"9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8"} Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.613898 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgr5j" event={"ID":"d02d1f42-aef5-489f-a827-c7511e25c02e","Type":"ContainerDied","Data":"e5630a3dd895347ca6d24e98bc697205d0ac529068791339523627a2f46b30c9"} Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.613915 4880 scope.go:117] "RemoveContainer" containerID="9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.613951 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mgr5j" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.666268 4880 scope.go:117] "RemoveContainer" containerID="71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.671142 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mgr5j"] Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.682481 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mgr5j"] Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.685600 4880 scope.go:117] "RemoveContainer" containerID="aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.760697 4880 scope.go:117] "RemoveContainer" containerID="9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8" Dec 01 05:16:12 crc kubenswrapper[4880]: E1201 05:16:12.761656 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8\": container with ID starting with 9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8 not found: ID does not exist" containerID="9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.761848 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8"} err="failed to get container status \"9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8\": rpc error: code = NotFound desc = could not find container \"9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8\": container with ID starting with 9adb38a2b18d27a6c347d57df77638e7eccee7d5840b324fb4b55c67f2df1aa8 not 
found: ID does not exist" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.762071 4880 scope.go:117] "RemoveContainer" containerID="71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554" Dec 01 05:16:12 crc kubenswrapper[4880]: E1201 05:16:12.762628 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554\": container with ID starting with 71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554 not found: ID does not exist" containerID="71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.762806 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554"} err="failed to get container status \"71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554\": rpc error: code = NotFound desc = could not find container \"71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554\": container with ID starting with 71b0e2189f3399a8643931abe44e33b9c04d1f84b7657224820d400d2c994554 not found: ID does not exist" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.762951 4880 scope.go:117] "RemoveContainer" containerID="aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff" Dec 01 05:16:12 crc kubenswrapper[4880]: E1201 05:16:12.763413 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff\": container with ID starting with aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff not found: ID does not exist" containerID="aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.763672 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff"} err="failed to get container status \"aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff\": rpc error: code = NotFound desc = could not find container \"aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff\": container with ID starting with aafe474d4b7b7f9aad23df7f4561a2cd33d4c1fbb0748dfffc12599d9a750aff not found: ID does not exist" Dec 01 05:16:12 crc kubenswrapper[4880]: I1201 05:16:12.798675 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" path="/var/lib/kubelet/pods/d02d1f42-aef5-489f-a827-c7511e25c02e/volumes" Dec 01 05:16:17 crc kubenswrapper[4880]: I1201 05:16:17.368783 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:16:17 crc kubenswrapper[4880]: I1201 05:16:17.370229 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:16:47 crc kubenswrapper[4880]: I1201 05:16:47.368738 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:16:47 crc kubenswrapper[4880]: I1201 05:16:47.369430 4880 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:16:47 crc kubenswrapper[4880]: I1201 05:16:47.369478 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 05:16:47 crc kubenswrapper[4880]: I1201 05:16:47.370354 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 05:16:47 crc kubenswrapper[4880]: I1201 05:16:47.370407 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" gracePeriod=600 Dec 01 05:16:47 crc kubenswrapper[4880]: E1201 05:16:47.489206 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:16:48 crc kubenswrapper[4880]: I1201 05:16:48.022353 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" 
containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" exitCode=0 Dec 01 05:16:48 crc kubenswrapper[4880]: I1201 05:16:48.022399 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2"} Dec 01 05:16:48 crc kubenswrapper[4880]: I1201 05:16:48.022434 4880 scope.go:117] "RemoveContainer" containerID="5abede3936b65ebeb52de9d2d80ab376f80189c39953c59e000ef239d395e90b" Dec 01 05:16:48 crc kubenswrapper[4880]: I1201 05:16:48.023061 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:16:48 crc kubenswrapper[4880]: E1201 05:16:48.023383 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:17:01 crc kubenswrapper[4880]: I1201 05:17:01.784558 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:17:01 crc kubenswrapper[4880]: E1201 05:17:01.785819 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:17:14 crc kubenswrapper[4880]: I1201 
05:17:14.784551 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:17:14 crc kubenswrapper[4880]: E1201 05:17:14.785814 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:17:27 crc kubenswrapper[4880]: I1201 05:17:27.784995 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:17:27 crc kubenswrapper[4880]: E1201 05:17:27.785780 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:17:39 crc kubenswrapper[4880]: I1201 05:17:39.784216 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:17:39 crc kubenswrapper[4880]: E1201 05:17:39.784804 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:17:52 crc 
kubenswrapper[4880]: I1201 05:17:52.784801 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:17:52 crc kubenswrapper[4880]: E1201 05:17:52.785691 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:18:03 crc kubenswrapper[4880]: I1201 05:18:03.783845 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:18:03 crc kubenswrapper[4880]: E1201 05:18:03.784923 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:18:17 crc kubenswrapper[4880]: I1201 05:18:17.786011 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:18:17 crc kubenswrapper[4880]: E1201 05:18:17.789591 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 
01 05:18:29 crc kubenswrapper[4880]: I1201 05:18:29.785160 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:18:29 crc kubenswrapper[4880]: E1201 05:18:29.789074 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:18:42 crc kubenswrapper[4880]: I1201 05:18:42.784396 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:18:42 crc kubenswrapper[4880]: E1201 05:18:42.785223 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:18:54 crc kubenswrapper[4880]: I1201 05:18:54.785493 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:18:54 crc kubenswrapper[4880]: E1201 05:18:54.786336 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:19:07 crc kubenswrapper[4880]: I1201 05:19:07.784183 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:19:07 crc kubenswrapper[4880]: E1201 05:19:07.785474 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:19:14 crc kubenswrapper[4880]: E1201 05:19:14.903577 4880 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.39:57904->38.102.83.39:42095: read tcp 38.102.83.39:57904->38.102.83.39:42095: read: connection reset by peer Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.059927 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f7896444c-h22fk"] Dec 01 05:19:18 crc kubenswrapper[4880]: E1201 05:19:18.060675 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="extract-utilities" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.060692 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="extract-utilities" Dec 01 05:19:18 crc kubenswrapper[4880]: E1201 05:19:18.060713 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="extract-content" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.060721 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="extract-content" Dec 01 05:19:18 crc kubenswrapper[4880]: E1201 
05:19:18.060738 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="registry-server" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.060746 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="registry-server" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.061018 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02d1f42-aef5-489f-a827-c7511e25c02e" containerName="registry-server" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.062229 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.089950 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f7896444c-h22fk"] Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.138689 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-internal-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.138785 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-combined-ca-bundle\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.138864 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-public-tls-certs\") pod 
\"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.138993 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-httpd-config\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.139387 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-ovndb-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.139562 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkb6l\" (UniqueName: \"kubernetes.io/projected/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-kube-api-access-mkb6l\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.139696 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-config\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.241949 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-ovndb-tls-certs\") pod \"neutron-5f7896444c-h22fk\" 
(UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.242016 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkb6l\" (UniqueName: \"kubernetes.io/projected/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-kube-api-access-mkb6l\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.242058 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-config\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.242084 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-internal-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.242106 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-combined-ca-bundle\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.242133 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-public-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " 
pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.242164 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-httpd-config\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.253359 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-ovndb-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.255790 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-public-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.255863 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-httpd-config\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.255973 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-combined-ca-bundle\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.256494 4880 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-internal-tls-certs\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.260429 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-config\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.260506 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkb6l\" (UniqueName: \"kubernetes.io/projected/a9ca22ff-e70c-4883-b86a-d2f7c3a75d91-kube-api-access-mkb6l\") pod \"neutron-5f7896444c-h22fk\" (UID: \"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91\") " pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.429586 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:18 crc kubenswrapper[4880]: I1201 05:19:18.784634 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:19:18 crc kubenswrapper[4880]: E1201 05:19:18.785411 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:19:19 crc kubenswrapper[4880]: I1201 05:19:19.094442 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f7896444c-h22fk"] Dec 01 05:19:19 crc kubenswrapper[4880]: I1201 05:19:19.668459 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f7896444c-h22fk" event={"ID":"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91","Type":"ContainerStarted","Data":"20babdd4c91262269ff14736817b3d8fb33a4a85b1f98c7183b4d8c924cc64cd"} Dec 01 05:19:19 crc kubenswrapper[4880]: I1201 05:19:19.669077 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f7896444c-h22fk" event={"ID":"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91","Type":"ContainerStarted","Data":"aad4357536e919b2ce8bf583a76758641932ed432479b7d35bf9722d5a716037"} Dec 01 05:19:19 crc kubenswrapper[4880]: I1201 05:19:19.669087 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f7896444c-h22fk" event={"ID":"a9ca22ff-e70c-4883-b86a-d2f7c3a75d91","Type":"ContainerStarted","Data":"374fe8784f47e771c065c29001f9a5f1a0ea2838a3fa68787ae879f2109f29fa"} Dec 01 05:19:19 crc kubenswrapper[4880]: I1201 05:19:19.669100 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:19 crc kubenswrapper[4880]: I1201 05:19:19.689208 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f7896444c-h22fk" podStartSLOduration=1.6891872879999998 podStartE2EDuration="1.689187288s" podCreationTimestamp="2025-12-01 05:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 05:19:19.683603741 +0000 UTC m=+8589.194858133" watchObservedRunningTime="2025-12-01 05:19:19.689187288 +0000 UTC m=+8589.200441670" Dec 01 05:19:33 crc kubenswrapper[4880]: I1201 05:19:33.784276 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:19:33 crc kubenswrapper[4880]: E1201 05:19:33.784955 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:19:47 crc kubenswrapper[4880]: I1201 05:19:47.784691 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:19:47 crc kubenswrapper[4880]: E1201 05:19:47.785374 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:19:48 crc kubenswrapper[4880]: 
I1201 05:19:48.841759 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f7896444c-h22fk" Dec 01 05:19:48 crc kubenswrapper[4880]: I1201 05:19:48.986601 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5874f859b7-82pgv"] Dec 01 05:19:48 crc kubenswrapper[4880]: I1201 05:19:48.988337 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5874f859b7-82pgv" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-api" containerID="cri-o://84ff8cf7c9b10f65b33e017516fc50cd8be21fc5f9db57fa0cf51d59403bc16a" gracePeriod=30 Dec 01 05:19:48 crc kubenswrapper[4880]: I1201 05:19:48.996986 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5874f859b7-82pgv" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-httpd" containerID="cri-o://35b1fc4a09d622638d3eb42da8cce8dd8c2da920b72b88ff2050e1d94bb53189" gracePeriod=30 Dec 01 05:19:50 crc kubenswrapper[4880]: I1201 05:19:50.006385 4880 generic.go:334] "Generic (PLEG): container finished" podID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerID="35b1fc4a09d622638d3eb42da8cce8dd8c2da920b72b88ff2050e1d94bb53189" exitCode=0 Dec 01 05:19:50 crc kubenswrapper[4880]: I1201 05:19:50.006409 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5874f859b7-82pgv" event={"ID":"ead09403-3fb6-4417-a9d9-694b3070c66d","Type":"ContainerDied","Data":"35b1fc4a09d622638d3eb42da8cce8dd8c2da920b72b88ff2050e1d94bb53189"} Dec 01 05:19:51 crc kubenswrapper[4880]: I1201 05:19:51.680098 4880 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5874f859b7-82pgv" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.1.14:9696/\": dial tcp 10.217.1.14:9696: connect: connection refused" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.057476 4880 
generic.go:334] "Generic (PLEG): container finished" podID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerID="84ff8cf7c9b10f65b33e017516fc50cd8be21fc5f9db57fa0cf51d59403bc16a" exitCode=0 Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.057652 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5874f859b7-82pgv" event={"ID":"ead09403-3fb6-4417-a9d9-694b3070c66d","Type":"ContainerDied","Data":"84ff8cf7c9b10f65b33e017516fc50cd8be21fc5f9db57fa0cf51d59403bc16a"} Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.488727 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5874f859b7-82pgv" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.637822 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2g7m\" (UniqueName: \"kubernetes.io/projected/ead09403-3fb6-4417-a9d9-694b3070c66d-kube-api-access-c2g7m\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.638998 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-public-tls-certs\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.639234 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-internal-tls-certs\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.639265 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-combined-ca-bundle\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.640169 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-config\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.640245 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-ovndb-tls-certs\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.640270 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-httpd-config\") pod \"ead09403-3fb6-4417-a9d9-694b3070c66d\" (UID: \"ead09403-3fb6-4417-a9d9-694b3070c66d\") " Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.648634 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ead09403-3fb6-4417-a9d9-694b3070c66d-kube-api-access-c2g7m" (OuterVolumeSpecName: "kube-api-access-c2g7m") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "kube-api-access-c2g7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.654864 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.710141 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.712331 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.723056 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.724073 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-config" (OuterVolumeSpecName: "config") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.744773 4880 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.744950 4880 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.745033 4880 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-config\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.745669 4880 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.745774 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2g7m\" (UniqueName: \"kubernetes.io/projected/ead09403-3fb6-4417-a9d9-694b3070c66d-kube-api-access-c2g7m\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.745859 4880 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.753580 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ead09403-3fb6-4417-a9d9-694b3070c66d" (UID: "ead09403-3fb6-4417-a9d9-694b3070c66d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:19:55 crc kubenswrapper[4880]: I1201 05:19:55.848354 4880 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead09403-3fb6-4417-a9d9-694b3070c66d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.068754 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5874f859b7-82pgv" event={"ID":"ead09403-3fb6-4417-a9d9-694b3070c66d","Type":"ContainerDied","Data":"d5cea0f0b69b07ec884d35cd3b99bbf554ce762527ff3f4ee60de1d5b0f4bd69"} Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.068802 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5874f859b7-82pgv" Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.068816 4880 scope.go:117] "RemoveContainer" containerID="35b1fc4a09d622638d3eb42da8cce8dd8c2da920b72b88ff2050e1d94bb53189" Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.126676 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5874f859b7-82pgv"] Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.133228 4880 scope.go:117] "RemoveContainer" containerID="84ff8cf7c9b10f65b33e017516fc50cd8be21fc5f9db57fa0cf51d59403bc16a" Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.139114 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5874f859b7-82pgv"] Dec 01 05:19:56 crc kubenswrapper[4880]: I1201 05:19:56.798798 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" path="/var/lib/kubelet/pods/ead09403-3fb6-4417-a9d9-694b3070c66d/volumes" Dec 01 05:19:59 crc kubenswrapper[4880]: I1201 05:19:59.784017 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:19:59 crc kubenswrapper[4880]: E1201 05:19:59.784953 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:20:10 crc kubenswrapper[4880]: I1201 05:20:10.798475 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:20:10 crc kubenswrapper[4880]: E1201 05:20:10.799495 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.625024 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zqvh9"] Dec 01 05:20:19 crc kubenswrapper[4880]: E1201 05:20:19.626020 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-api" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.626035 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-api" Dec 01 05:20:19 crc kubenswrapper[4880]: E1201 05:20:19.626054 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-httpd" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.626062 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-httpd" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.626244 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-api" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.626268 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead09403-3fb6-4417-a9d9-694b3070c66d" containerName="neutron-httpd" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.630753 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.652299 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqvh9"] Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.668622 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-utilities\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.668690 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-catalog-content\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.668746 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb985\" (UniqueName: \"kubernetes.io/projected/8dfbf730-c279-4ef9-807c-3a12c17957c6-kube-api-access-mb985\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.770430 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-utilities\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.770505 4880 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-catalog-content\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.770556 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb985\" (UniqueName: \"kubernetes.io/projected/8dfbf730-c279-4ef9-807c-3a12c17957c6-kube-api-access-mb985\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.771109 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-utilities\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.771172 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-catalog-content\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.797696 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb985\" (UniqueName: \"kubernetes.io/projected/8dfbf730-c279-4ef9-807c-3a12c17957c6-kube-api-access-mb985\") pod \"redhat-marketplace-zqvh9\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:19 crc kubenswrapper[4880]: I1201 05:20:19.953322 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:20 crc kubenswrapper[4880]: I1201 05:20:20.492270 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqvh9"] Dec 01 05:20:21 crc kubenswrapper[4880]: I1201 05:20:21.351603 4880 generic.go:334] "Generic (PLEG): container finished" podID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerID="78b1ca365172e552b6cab0a32390a60f457ad4825c1fe2690d06baf41fbd8593" exitCode=0 Dec 01 05:20:21 crc kubenswrapper[4880]: I1201 05:20:21.351867 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqvh9" event={"ID":"8dfbf730-c279-4ef9-807c-3a12c17957c6","Type":"ContainerDied","Data":"78b1ca365172e552b6cab0a32390a60f457ad4825c1fe2690d06baf41fbd8593"} Dec 01 05:20:21 crc kubenswrapper[4880]: I1201 05:20:21.352023 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqvh9" event={"ID":"8dfbf730-c279-4ef9-807c-3a12c17957c6","Type":"ContainerStarted","Data":"6d28a3cf7f1305d2fe99e175916476d9c85e1bd699e66cf6d76ed7263a0d6b1d"} Dec 01 05:20:21 crc kubenswrapper[4880]: I1201 05:20:21.783919 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:20:21 crc kubenswrapper[4880]: E1201 05:20:21.784270 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:20:23 crc kubenswrapper[4880]: I1201 05:20:23.375995 4880 generic.go:334] "Generic (PLEG): container finished" podID="8dfbf730-c279-4ef9-807c-3a12c17957c6" 
containerID="55c82855d6a1ecd07ed21f27ba00ed5fd3c6a851ab48e497e1b2cbb8ec95412e" exitCode=0 Dec 01 05:20:23 crc kubenswrapper[4880]: I1201 05:20:23.376108 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqvh9" event={"ID":"8dfbf730-c279-4ef9-807c-3a12c17957c6","Type":"ContainerDied","Data":"55c82855d6a1ecd07ed21f27ba00ed5fd3c6a851ab48e497e1b2cbb8ec95412e"} Dec 01 05:20:24 crc kubenswrapper[4880]: I1201 05:20:24.394123 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqvh9" event={"ID":"8dfbf730-c279-4ef9-807c-3a12c17957c6","Type":"ContainerStarted","Data":"5b0e3a906c05d00cb46a61b218700d54b7c92b080f77d4ac97f7939a7c94f880"} Dec 01 05:20:24 crc kubenswrapper[4880]: I1201 05:20:24.426231 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zqvh9" podStartSLOduration=2.913640305 podStartE2EDuration="5.426203656s" podCreationTimestamp="2025-12-01 05:20:19 +0000 UTC" firstStartedPulling="2025-12-01 05:20:21.355220705 +0000 UTC m=+8650.866475097" lastFinishedPulling="2025-12-01 05:20:23.867784046 +0000 UTC m=+8653.379038448" observedRunningTime="2025-12-01 05:20:24.420139548 +0000 UTC m=+8653.931393960" watchObservedRunningTime="2025-12-01 05:20:24.426203656 +0000 UTC m=+8653.937458038" Dec 01 05:20:29 crc kubenswrapper[4880]: I1201 05:20:29.953944 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:29 crc kubenswrapper[4880]: I1201 05:20:29.954342 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:30 crc kubenswrapper[4880]: I1201 05:20:30.002527 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:30 crc kubenswrapper[4880]: I1201 05:20:30.547911 
4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:30 crc kubenswrapper[4880]: I1201 05:20:30.654224 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqvh9"] Dec 01 05:20:32 crc kubenswrapper[4880]: I1201 05:20:32.491640 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zqvh9" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="registry-server" containerID="cri-o://5b0e3a906c05d00cb46a61b218700d54b7c92b080f77d4ac97f7939a7c94f880" gracePeriod=2 Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.504364 4880 generic.go:334] "Generic (PLEG): container finished" podID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerID="5b0e3a906c05d00cb46a61b218700d54b7c92b080f77d4ac97f7939a7c94f880" exitCode=0 Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.504439 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqvh9" event={"ID":"8dfbf730-c279-4ef9-807c-3a12c17957c6","Type":"ContainerDied","Data":"5b0e3a906c05d00cb46a61b218700d54b7c92b080f77d4ac97f7939a7c94f880"} Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.504982 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqvh9" event={"ID":"8dfbf730-c279-4ef9-807c-3a12c17957c6","Type":"ContainerDied","Data":"6d28a3cf7f1305d2fe99e175916476d9c85e1bd699e66cf6d76ed7263a0d6b1d"} Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.504994 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d28a3cf7f1305d2fe99e175916476d9c85e1bd699e66cf6d76ed7263a0d6b1d" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.572578 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.657548 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-utilities\") pod \"8dfbf730-c279-4ef9-807c-3a12c17957c6\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.657616 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb985\" (UniqueName: \"kubernetes.io/projected/8dfbf730-c279-4ef9-807c-3a12c17957c6-kube-api-access-mb985\") pod \"8dfbf730-c279-4ef9-807c-3a12c17957c6\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.657907 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-catalog-content\") pod \"8dfbf730-c279-4ef9-807c-3a12c17957c6\" (UID: \"8dfbf730-c279-4ef9-807c-3a12c17957c6\") " Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.658541 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-utilities" (OuterVolumeSpecName: "utilities") pod "8dfbf730-c279-4ef9-807c-3a12c17957c6" (UID: "8dfbf730-c279-4ef9-807c-3a12c17957c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.672179 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dfbf730-c279-4ef9-807c-3a12c17957c6-kube-api-access-mb985" (OuterVolumeSpecName: "kube-api-access-mb985") pod "8dfbf730-c279-4ef9-807c-3a12c17957c6" (UID: "8dfbf730-c279-4ef9-807c-3a12c17957c6"). InnerVolumeSpecName "kube-api-access-mb985". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.676797 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8dfbf730-c279-4ef9-807c-3a12c17957c6" (UID: "8dfbf730-c279-4ef9-807c-3a12c17957c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.759841 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.759916 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dfbf730-c279-4ef9-807c-3a12c17957c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:20:33 crc kubenswrapper[4880]: I1201 05:20:33.759927 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb985\" (UniqueName: \"kubernetes.io/projected/8dfbf730-c279-4ef9-807c-3a12c17957c6-kube-api-access-mb985\") on node \"crc\" DevicePath \"\"" Dec 01 05:20:34 crc kubenswrapper[4880]: I1201 05:20:34.514952 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqvh9" Dec 01 05:20:34 crc kubenswrapper[4880]: I1201 05:20:34.565038 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqvh9"] Dec 01 05:20:34 crc kubenswrapper[4880]: I1201 05:20:34.574767 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqvh9"] Dec 01 05:20:34 crc kubenswrapper[4880]: I1201 05:20:34.801522 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" path="/var/lib/kubelet/pods/8dfbf730-c279-4ef9-807c-3a12c17957c6/volumes" Dec 01 05:20:35 crc kubenswrapper[4880]: I1201 05:20:35.785157 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:20:35 crc kubenswrapper[4880]: E1201 05:20:35.785362 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:20:47 crc kubenswrapper[4880]: I1201 05:20:47.784385 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:20:47 crc kubenswrapper[4880]: E1201 05:20:47.785067 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:21:00 crc kubenswrapper[4880]: I1201 05:21:00.789372 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:21:00 crc kubenswrapper[4880]: E1201 05:21:00.790044 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:21:14 crc kubenswrapper[4880]: I1201 05:21:14.785314 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:21:14 crc kubenswrapper[4880]: E1201 05:21:14.786149 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:21:27 crc kubenswrapper[4880]: I1201 05:21:27.784438 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:21:27 crc kubenswrapper[4880]: E1201 05:21:27.785157 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:21:42 crc kubenswrapper[4880]: I1201 05:21:42.791712 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:21:42 crc kubenswrapper[4880]: E1201 05:21:42.805422 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:21:57 crc kubenswrapper[4880]: I1201 05:21:57.785758 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:21:58 crc kubenswrapper[4880]: I1201 05:21:58.369568 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"4895c0ebe62faeb46e0983701ccde4b73c860ade0c3da071a7ae68885c0b352a"} Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.320275 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qv6zr"] Dec 01 05:22:32 crc kubenswrapper[4880]: E1201 05:22:32.321370 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="extract-utilities" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.321385 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="extract-utilities" Dec 01 05:22:32 crc kubenswrapper[4880]: E1201 05:22:32.321406 4880 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="extract-content" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.321413 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="extract-content" Dec 01 05:22:32 crc kubenswrapper[4880]: E1201 05:22:32.321430 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="registry-server" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.321436 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="registry-server" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.321654 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dfbf730-c279-4ef9-807c-3a12c17957c6" containerName="registry-server" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.322990 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qv6zr"] Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.323068 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.428703 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-utilities\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.429173 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4t7\" (UniqueName: \"kubernetes.io/projected/406085d0-cd6d-482f-b495-8b17c7294256-kube-api-access-rj4t7\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.429401 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-catalog-content\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.531104 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-utilities\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.531212 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4t7\" (UniqueName: \"kubernetes.io/projected/406085d0-cd6d-482f-b495-8b17c7294256-kube-api-access-rj4t7\") pod 
\"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.531242 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-catalog-content\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.532239 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-catalog-content\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.532940 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-utilities\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.559378 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4t7\" (UniqueName: \"kubernetes.io/projected/406085d0-cd6d-482f-b495-8b17c7294256-kube-api-access-rj4t7\") pod \"certified-operators-qv6zr\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:32 crc kubenswrapper[4880]: I1201 05:22:32.661294 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:33 crc kubenswrapper[4880]: I1201 05:22:33.142708 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qv6zr"] Dec 01 05:22:33 crc kubenswrapper[4880]: I1201 05:22:33.803620 4880 generic.go:334] "Generic (PLEG): container finished" podID="406085d0-cd6d-482f-b495-8b17c7294256" containerID="f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1" exitCode=0 Dec 01 05:22:33 crc kubenswrapper[4880]: I1201 05:22:33.803703 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv6zr" event={"ID":"406085d0-cd6d-482f-b495-8b17c7294256","Type":"ContainerDied","Data":"f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1"} Dec 01 05:22:33 crc kubenswrapper[4880]: I1201 05:22:33.803884 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv6zr" event={"ID":"406085d0-cd6d-482f-b495-8b17c7294256","Type":"ContainerStarted","Data":"22863233647d205df7ca98af7d43816b569c574ecd4ecc0f20b0f8244d149a82"} Dec 01 05:22:33 crc kubenswrapper[4880]: I1201 05:22:33.806084 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.691777 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8x7dc"] Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.694556 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.759317 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8x7dc"] Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.774723 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6s5b\" (UniqueName: \"kubernetes.io/projected/2b860f4c-c427-48c3-874e-259d0f4b0194-kube-api-access-j6s5b\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.774962 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-catalog-content\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.775123 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-utilities\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.876695 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6s5b\" (UniqueName: \"kubernetes.io/projected/2b860f4c-c427-48c3-874e-259d0f4b0194-kube-api-access-j6s5b\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.876740 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-catalog-content\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.876790 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-utilities\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.877284 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-utilities\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.877930 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-catalog-content\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:34 crc kubenswrapper[4880]: I1201 05:22:34.901621 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6s5b\" (UniqueName: \"kubernetes.io/projected/2b860f4c-c427-48c3-874e-259d0f4b0194-kube-api-access-j6s5b\") pod \"redhat-operators-8x7dc\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:35 crc kubenswrapper[4880]: I1201 05:22:35.025515 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:35 crc kubenswrapper[4880]: W1201 05:22:35.673919 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b860f4c_c427_48c3_874e_259d0f4b0194.slice/crio-4432a1e5dc11f5a3e4b483f373d46b226ddec2bb9ffd38225f3da107a61e9251 WatchSource:0}: Error finding container 4432a1e5dc11f5a3e4b483f373d46b226ddec2bb9ffd38225f3da107a61e9251: Status 404 returned error can't find the container with id 4432a1e5dc11f5a3e4b483f373d46b226ddec2bb9ffd38225f3da107a61e9251 Dec 01 05:22:35 crc kubenswrapper[4880]: I1201 05:22:35.689494 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8x7dc"] Dec 01 05:22:35 crc kubenswrapper[4880]: I1201 05:22:35.825600 4880 generic.go:334] "Generic (PLEG): container finished" podID="406085d0-cd6d-482f-b495-8b17c7294256" containerID="0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce" exitCode=0 Dec 01 05:22:35 crc kubenswrapper[4880]: I1201 05:22:35.825855 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv6zr" event={"ID":"406085d0-cd6d-482f-b495-8b17c7294256","Type":"ContainerDied","Data":"0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce"} Dec 01 05:22:35 crc kubenswrapper[4880]: I1201 05:22:35.826532 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerStarted","Data":"4432a1e5dc11f5a3e4b483f373d46b226ddec2bb9ffd38225f3da107a61e9251"} Dec 01 05:22:36 crc kubenswrapper[4880]: I1201 05:22:36.839174 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv6zr" 
event={"ID":"406085d0-cd6d-482f-b495-8b17c7294256","Type":"ContainerStarted","Data":"b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9"} Dec 01 05:22:36 crc kubenswrapper[4880]: I1201 05:22:36.844310 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerID="2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30" exitCode=0 Dec 01 05:22:36 crc kubenswrapper[4880]: I1201 05:22:36.844547 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerDied","Data":"2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30"} Dec 01 05:22:36 crc kubenswrapper[4880]: I1201 05:22:36.887502 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qv6zr" podStartSLOduration=2.266550763 podStartE2EDuration="4.887485607s" podCreationTimestamp="2025-12-01 05:22:32 +0000 UTC" firstStartedPulling="2025-12-01 05:22:33.805850265 +0000 UTC m=+8783.317104637" lastFinishedPulling="2025-12-01 05:22:36.426785069 +0000 UTC m=+8785.938039481" observedRunningTime="2025-12-01 05:22:36.880449115 +0000 UTC m=+8786.391703487" watchObservedRunningTime="2025-12-01 05:22:36.887485607 +0000 UTC m=+8786.398739979" Dec 01 05:22:38 crc kubenswrapper[4880]: I1201 05:22:38.881284 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerStarted","Data":"1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be"} Dec 01 05:22:40 crc kubenswrapper[4880]: I1201 05:22:40.903299 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerID="1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be" exitCode=0 Dec 01 05:22:40 crc kubenswrapper[4880]: I1201 
05:22:40.903383 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerDied","Data":"1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be"} Dec 01 05:22:41 crc kubenswrapper[4880]: I1201 05:22:41.914398 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerStarted","Data":"2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668"} Dec 01 05:22:41 crc kubenswrapper[4880]: I1201 05:22:41.940499 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8x7dc" podStartSLOduration=3.33567298 podStartE2EDuration="7.940479112s" podCreationTimestamp="2025-12-01 05:22:34 +0000 UTC" firstStartedPulling="2025-12-01 05:22:36.873828463 +0000 UTC m=+8786.385082835" lastFinishedPulling="2025-12-01 05:22:41.478634605 +0000 UTC m=+8790.989888967" observedRunningTime="2025-12-01 05:22:41.931214075 +0000 UTC m=+8791.442468447" watchObservedRunningTime="2025-12-01 05:22:41.940479112 +0000 UTC m=+8791.451733494" Dec 01 05:22:42 crc kubenswrapper[4880]: I1201 05:22:42.661945 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:42 crc kubenswrapper[4880]: I1201 05:22:42.662211 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:43 crc kubenswrapper[4880]: I1201 05:22:43.712088 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qv6zr" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="registry-server" probeResult="failure" output=< Dec 01 05:22:43 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:22:43 crc 
kubenswrapper[4880]: > Dec 01 05:22:45 crc kubenswrapper[4880]: I1201 05:22:45.026697 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:45 crc kubenswrapper[4880]: I1201 05:22:45.027033 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:22:46 crc kubenswrapper[4880]: I1201 05:22:46.078819 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8x7dc" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="registry-server" probeResult="failure" output=< Dec 01 05:22:46 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:22:46 crc kubenswrapper[4880]: > Dec 01 05:22:52 crc kubenswrapper[4880]: I1201 05:22:52.708970 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:52 crc kubenswrapper[4880]: I1201 05:22:52.766134 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:52 crc kubenswrapper[4880]: I1201 05:22:52.958274 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qv6zr"] Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.024466 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qv6zr" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="registry-server" containerID="cri-o://b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9" gracePeriod=2 Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.743283 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.853174 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-catalog-content\") pod \"406085d0-cd6d-482f-b495-8b17c7294256\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.853264 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-utilities\") pod \"406085d0-cd6d-482f-b495-8b17c7294256\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.853483 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4t7\" (UniqueName: \"kubernetes.io/projected/406085d0-cd6d-482f-b495-8b17c7294256-kube-api-access-rj4t7\") pod \"406085d0-cd6d-482f-b495-8b17c7294256\" (UID: \"406085d0-cd6d-482f-b495-8b17c7294256\") " Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.854626 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-utilities" (OuterVolumeSpecName: "utilities") pod "406085d0-cd6d-482f-b495-8b17c7294256" (UID: "406085d0-cd6d-482f-b495-8b17c7294256"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.883508 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406085d0-cd6d-482f-b495-8b17c7294256-kube-api-access-rj4t7" (OuterVolumeSpecName: "kube-api-access-rj4t7") pod "406085d0-cd6d-482f-b495-8b17c7294256" (UID: "406085d0-cd6d-482f-b495-8b17c7294256"). InnerVolumeSpecName "kube-api-access-rj4t7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.917850 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406085d0-cd6d-482f-b495-8b17c7294256" (UID: "406085d0-cd6d-482f-b495-8b17c7294256"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.958026 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj4t7\" (UniqueName: \"kubernetes.io/projected/406085d0-cd6d-482f-b495-8b17c7294256-kube-api-access-rj4t7\") on node \"crc\" DevicePath \"\"" Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.958074 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:22:54 crc kubenswrapper[4880]: I1201 05:22:54.958083 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406085d0-cd6d-482f-b495-8b17c7294256-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.040076 4880 generic.go:334] "Generic (PLEG): container finished" podID="406085d0-cd6d-482f-b495-8b17c7294256" containerID="b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9" exitCode=0 Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.040124 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv6zr" event={"ID":"406085d0-cd6d-482f-b495-8b17c7294256","Type":"ContainerDied","Data":"b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9"} Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.040150 4880 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-qv6zr" event={"ID":"406085d0-cd6d-482f-b495-8b17c7294256","Type":"ContainerDied","Data":"22863233647d205df7ca98af7d43816b569c574ecd4ecc0f20b0f8244d149a82"} Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.040360 4880 scope.go:117] "RemoveContainer" containerID="b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.040496 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qv6zr" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.105010 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qv6zr"] Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.107425 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qv6zr"] Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.111082 4880 scope.go:117] "RemoveContainer" containerID="0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.132999 4880 scope.go:117] "RemoveContainer" containerID="f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.188523 4880 scope.go:117] "RemoveContainer" containerID="b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9" Dec 01 05:22:55 crc kubenswrapper[4880]: E1201 05:22:55.190389 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9\": container with ID starting with b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9 not found: ID does not exist" containerID="b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 
05:22:55.190733 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9"} err="failed to get container status \"b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9\": rpc error: code = NotFound desc = could not find container \"b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9\": container with ID starting with b051b32de86942d67ec471c758a82b9959c8f77dcf56b99c39e60bfb62d3e9d9 not found: ID does not exist" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.190768 4880 scope.go:117] "RemoveContainer" containerID="0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce" Dec 01 05:22:55 crc kubenswrapper[4880]: E1201 05:22:55.191159 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce\": container with ID starting with 0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce not found: ID does not exist" containerID="0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.191252 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce"} err="failed to get container status \"0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce\": rpc error: code = NotFound desc = could not find container \"0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce\": container with ID starting with 0a307ed5a94d8ff029e5b8218b6686eee3371a3e57281fe8dde079475d8096ce not found: ID does not exist" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.191331 4880 scope.go:117] "RemoveContainer" containerID="f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1" Dec 01 05:22:55 crc 
kubenswrapper[4880]: E1201 05:22:55.191827 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1\": container with ID starting with f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1 not found: ID does not exist" containerID="f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1" Dec 01 05:22:55 crc kubenswrapper[4880]: I1201 05:22:55.191914 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1"} err="failed to get container status \"f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1\": rpc error: code = NotFound desc = could not find container \"f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1\": container with ID starting with f54f00117629cbcd70acd4debaa53e980d183d55ee90655068dd99546a3af7c1 not found: ID does not exist" Dec 01 05:22:56 crc kubenswrapper[4880]: I1201 05:22:56.097979 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8x7dc" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="registry-server" probeResult="failure" output=< Dec 01 05:22:56 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:22:56 crc kubenswrapper[4880]: > Dec 01 05:22:56 crc kubenswrapper[4880]: I1201 05:22:56.801574 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406085d0-cd6d-482f-b495-8b17c7294256" path="/var/lib/kubelet/pods/406085d0-cd6d-482f-b495-8b17c7294256/volumes" Dec 01 05:23:05 crc kubenswrapper[4880]: I1201 05:23:05.079236 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:23:05 crc kubenswrapper[4880]: I1201 05:23:05.137186 4880 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:23:05 crc kubenswrapper[4880]: I1201 05:23:05.898652 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8x7dc"] Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.178451 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8x7dc" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="registry-server" containerID="cri-o://2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668" gracePeriod=2 Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.720545 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.809760 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-utilities\") pod \"2b860f4c-c427-48c3-874e-259d0f4b0194\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.810364 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-catalog-content\") pod \"2b860f4c-c427-48c3-874e-259d0f4b0194\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.810509 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-utilities" (OuterVolumeSpecName: "utilities") pod "2b860f4c-c427-48c3-874e-259d0f4b0194" (UID: "2b860f4c-c427-48c3-874e-259d0f4b0194"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.810520 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6s5b\" (UniqueName: \"kubernetes.io/projected/2b860f4c-c427-48c3-874e-259d0f4b0194-kube-api-access-j6s5b\") pod \"2b860f4c-c427-48c3-874e-259d0f4b0194\" (UID: \"2b860f4c-c427-48c3-874e-259d0f4b0194\") " Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.811517 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.831532 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b860f4c-c427-48c3-874e-259d0f4b0194-kube-api-access-j6s5b" (OuterVolumeSpecName: "kube-api-access-j6s5b") pod "2b860f4c-c427-48c3-874e-259d0f4b0194" (UID: "2b860f4c-c427-48c3-874e-259d0f4b0194"). InnerVolumeSpecName "kube-api-access-j6s5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.913558 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6s5b\" (UniqueName: \"kubernetes.io/projected/2b860f4c-c427-48c3-874e-259d0f4b0194-kube-api-access-j6s5b\") on node \"crc\" DevicePath \"\"" Dec 01 05:23:06 crc kubenswrapper[4880]: I1201 05:23:06.923767 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b860f4c-c427-48c3-874e-259d0f4b0194" (UID: "2b860f4c-c427-48c3-874e-259d0f4b0194"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.016272 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b860f4c-c427-48c3-874e-259d0f4b0194-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.191077 4880 generic.go:334] "Generic (PLEG): container finished" podID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerID="2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668" exitCode=0 Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.191308 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerDied","Data":"2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668"} Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.192494 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8x7dc" event={"ID":"2b860f4c-c427-48c3-874e-259d0f4b0194","Type":"ContainerDied","Data":"4432a1e5dc11f5a3e4b483f373d46b226ddec2bb9ffd38225f3da107a61e9251"} Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.191417 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8x7dc" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.192547 4880 scope.go:117] "RemoveContainer" containerID="2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.228473 4880 scope.go:117] "RemoveContainer" containerID="1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.260728 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8x7dc"] Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.272428 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8x7dc"] Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.277507 4880 scope.go:117] "RemoveContainer" containerID="2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.311115 4880 scope.go:117] "RemoveContainer" containerID="2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668" Dec 01 05:23:07 crc kubenswrapper[4880]: E1201 05:23:07.311629 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668\": container with ID starting with 2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668 not found: ID does not exist" containerID="2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.311657 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668"} err="failed to get container status \"2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668\": rpc error: code = NotFound desc = could not find container 
\"2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668\": container with ID starting with 2455987bd49b81da73c962b78bec2c4e1855b2a93b59327cdbb3583277e90668 not found: ID does not exist" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.311676 4880 scope.go:117] "RemoveContainer" containerID="1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be" Dec 01 05:23:07 crc kubenswrapper[4880]: E1201 05:23:07.312330 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be\": container with ID starting with 1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be not found: ID does not exist" containerID="1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.312358 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be"} err="failed to get container status \"1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be\": rpc error: code = NotFound desc = could not find container \"1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be\": container with ID starting with 1592aa609744568da4cb0e9cc6054c4226c89f3ad5007a2ef62a7a3adfa3e1be not found: ID does not exist" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.312375 4880 scope.go:117] "RemoveContainer" containerID="2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30" Dec 01 05:23:07 crc kubenswrapper[4880]: E1201 05:23:07.312626 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30\": container with ID starting with 2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30 not found: ID does not exist" 
containerID="2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30" Dec 01 05:23:07 crc kubenswrapper[4880]: I1201 05:23:07.312651 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30"} err="failed to get container status \"2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30\": rpc error: code = NotFound desc = could not find container \"2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30\": container with ID starting with 2ca1c4f8cce8e08108d81d716f44e50920557ca7059eb5b1cf53dcad40fd3e30 not found: ID does not exist" Dec 01 05:23:08 crc kubenswrapper[4880]: I1201 05:23:08.801844 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" path="/var/lib/kubelet/pods/2b860f4c-c427-48c3-874e-259d0f4b0194/volumes" Dec 01 05:24:17 crc kubenswrapper[4880]: I1201 05:24:17.369234 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:24:17 crc kubenswrapper[4880]: I1201 05:24:17.369936 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:24:47 crc kubenswrapper[4880]: I1201 05:24:47.369585 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 01 05:24:47 crc kubenswrapper[4880]: I1201 05:24:47.370390 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.369560 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.370087 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.370138 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.371090 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4895c0ebe62faeb46e0983701ccde4b73c860ade0c3da071a7ae68885c0b352a"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.371216 4880 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://4895c0ebe62faeb46e0983701ccde4b73c860ade0c3da071a7ae68885c0b352a" gracePeriod=600 Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.689679 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="4895c0ebe62faeb46e0983701ccde4b73c860ade0c3da071a7ae68885c0b352a" exitCode=0 Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.689760 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"4895c0ebe62faeb46e0983701ccde4b73c860ade0c3da071a7ae68885c0b352a"} Dec 01 05:25:17 crc kubenswrapper[4880]: I1201 05:25:17.690034 4880 scope.go:117] "RemoveContainer" containerID="bc5f01b54af81178ce6bda7478370bd19a9aea7760453384750d440765d9baa2" Dec 01 05:25:18 crc kubenswrapper[4880]: I1201 05:25:18.700800 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e"} Dec 01 05:26:46 crc kubenswrapper[4880]: I1201 05:26:46.713614 4880 scope.go:117] "RemoveContainer" containerID="55c82855d6a1ecd07ed21f27ba00ed5fd3c6a851ab48e497e1b2cbb8ec95412e" Dec 01 05:26:46 crc kubenswrapper[4880]: I1201 05:26:46.769185 4880 scope.go:117] "RemoveContainer" containerID="5b0e3a906c05d00cb46a61b218700d54b7c92b080f77d4ac97f7939a7c94f880" Dec 01 05:26:46 crc kubenswrapper[4880]: I1201 05:26:46.841130 4880 scope.go:117] "RemoveContainer" containerID="78b1ca365172e552b6cab0a32390a60f457ad4825c1fe2690d06baf41fbd8593" Dec 01 05:27:17 crc kubenswrapper[4880]: I1201 05:27:17.368763 4880 patch_prober.go:28] interesting 
pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:27:17 crc kubenswrapper[4880]: I1201 05:27:17.371028 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.688109 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2hx2"] Dec 01 05:27:36 crc kubenswrapper[4880]: E1201 05:27:36.690026 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="extract-utilities" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690063 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="extract-utilities" Dec 01 05:27:36 crc kubenswrapper[4880]: E1201 05:27:36.690094 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="extract-content" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690110 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="extract-content" Dec 01 05:27:36 crc kubenswrapper[4880]: E1201 05:27:36.690134 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="registry-server" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690151 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" 
containerName="registry-server" Dec 01 05:27:36 crc kubenswrapper[4880]: E1201 05:27:36.690186 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="registry-server" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690200 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="registry-server" Dec 01 05:27:36 crc kubenswrapper[4880]: E1201 05:27:36.690245 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="extract-utilities" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690260 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="extract-utilities" Dec 01 05:27:36 crc kubenswrapper[4880]: E1201 05:27:36.690288 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="extract-content" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690304 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="extract-content" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690929 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="406085d0-cd6d-482f-b495-8b17c7294256" containerName="registry-server" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.690960 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b860f4c-c427-48c3-874e-259d0f4b0194" containerName="registry-server" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.696192 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.709017 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2hx2"] Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.821080 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjsx\" (UniqueName: \"kubernetes.io/projected/cbaf6029-822b-4869-8333-7f3d54ccda07-kube-api-access-znjsx\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.821316 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-catalog-content\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.821391 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-utilities\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.923350 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-catalog-content\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.923650 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-utilities\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.923755 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjsx\" (UniqueName: \"kubernetes.io/projected/cbaf6029-822b-4869-8333-7f3d54ccda07-kube-api-access-znjsx\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.924981 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-catalog-content\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.925316 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-utilities\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:36 crc kubenswrapper[4880]: I1201 05:27:36.953701 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjsx\" (UniqueName: \"kubernetes.io/projected/cbaf6029-822b-4869-8333-7f3d54ccda07-kube-api-access-znjsx\") pod \"community-operators-q2hx2\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:37 crc kubenswrapper[4880]: I1201 05:27:37.029780 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:37 crc kubenswrapper[4880]: I1201 05:27:37.530842 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2hx2"] Dec 01 05:27:38 crc kubenswrapper[4880]: I1201 05:27:38.187900 4880 generic.go:334] "Generic (PLEG): container finished" podID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerID="cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2" exitCode=0 Dec 01 05:27:38 crc kubenswrapper[4880]: I1201 05:27:38.187972 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerDied","Data":"cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2"} Dec 01 05:27:38 crc kubenswrapper[4880]: I1201 05:27:38.188042 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerStarted","Data":"f941ba04c059ea06c7bd3eee58f59d0742b56a39b013221b17f1dcdd97be1db0"} Dec 01 05:27:38 crc kubenswrapper[4880]: I1201 05:27:38.190748 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 05:27:39 crc kubenswrapper[4880]: I1201 05:27:39.199833 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerStarted","Data":"e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744"} Dec 01 05:27:40 crc kubenswrapper[4880]: I1201 05:27:40.208295 4880 generic.go:334] "Generic (PLEG): container finished" podID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerID="e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744" exitCode=0 Dec 01 05:27:40 crc kubenswrapper[4880]: I1201 05:27:40.208473 4880 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerDied","Data":"e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744"} Dec 01 05:27:41 crc kubenswrapper[4880]: I1201 05:27:41.220044 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerStarted","Data":"76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf"} Dec 01 05:27:41 crc kubenswrapper[4880]: I1201 05:27:41.238431 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2hx2" podStartSLOduration=2.701351802 podStartE2EDuration="5.238418333s" podCreationTimestamp="2025-12-01 05:27:36 +0000 UTC" firstStartedPulling="2025-12-01 05:27:38.189798269 +0000 UTC m=+9087.701052681" lastFinishedPulling="2025-12-01 05:27:40.7268648 +0000 UTC m=+9090.238119212" observedRunningTime="2025-12-01 05:27:41.232664352 +0000 UTC m=+9090.743918734" watchObservedRunningTime="2025-12-01 05:27:41.238418333 +0000 UTC m=+9090.749672705" Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.032119 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.032572 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.106155 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.369113 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.369484 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.388074 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:47 crc kubenswrapper[4880]: I1201 05:27:47.665772 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2hx2"] Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.344636 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q2hx2" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="registry-server" containerID="cri-o://76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf" gracePeriod=2 Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.835326 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.938822 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-catalog-content\") pod \"cbaf6029-822b-4869-8333-7f3d54ccda07\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.939253 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znjsx\" (UniqueName: \"kubernetes.io/projected/cbaf6029-822b-4869-8333-7f3d54ccda07-kube-api-access-znjsx\") pod \"cbaf6029-822b-4869-8333-7f3d54ccda07\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.939276 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-utilities\") pod \"cbaf6029-822b-4869-8333-7f3d54ccda07\" (UID: \"cbaf6029-822b-4869-8333-7f3d54ccda07\") " Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.941020 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-utilities" (OuterVolumeSpecName: "utilities") pod "cbaf6029-822b-4869-8333-7f3d54ccda07" (UID: "cbaf6029-822b-4869-8333-7f3d54ccda07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.958768 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbaf6029-822b-4869-8333-7f3d54ccda07-kube-api-access-znjsx" (OuterVolumeSpecName: "kube-api-access-znjsx") pod "cbaf6029-822b-4869-8333-7f3d54ccda07" (UID: "cbaf6029-822b-4869-8333-7f3d54ccda07"). InnerVolumeSpecName "kube-api-access-znjsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:27:49 crc kubenswrapper[4880]: I1201 05:27:49.993455 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbaf6029-822b-4869-8333-7f3d54ccda07" (UID: "cbaf6029-822b-4869-8333-7f3d54ccda07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.041840 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.041964 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znjsx\" (UniqueName: \"kubernetes.io/projected/cbaf6029-822b-4869-8333-7f3d54ccda07-kube-api-access-znjsx\") on node \"crc\" DevicePath \"\"" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.041982 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbaf6029-822b-4869-8333-7f3d54ccda07-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.357644 4880 generic.go:334] "Generic (PLEG): container finished" podID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerID="76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf" exitCode=0 Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.357690 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerDied","Data":"76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf"} Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.357722 4880 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2hx2" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.357758 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2hx2" event={"ID":"cbaf6029-822b-4869-8333-7f3d54ccda07","Type":"ContainerDied","Data":"f941ba04c059ea06c7bd3eee58f59d0742b56a39b013221b17f1dcdd97be1db0"} Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.357791 4880 scope.go:117] "RemoveContainer" containerID="76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.408339 4880 scope.go:117] "RemoveContainer" containerID="e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.412104 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2hx2"] Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.437742 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q2hx2"] Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.456773 4880 scope.go:117] "RemoveContainer" containerID="cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.495288 4880 scope.go:117] "RemoveContainer" containerID="76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf" Dec 01 05:27:50 crc kubenswrapper[4880]: E1201 05:27:50.495693 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf\": container with ID starting with 76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf not found: ID does not exist" containerID="76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.495723 
4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf"} err="failed to get container status \"76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf\": rpc error: code = NotFound desc = could not find container \"76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf\": container with ID starting with 76286e37095c531f0bdae7e21ecf85bee7d8710087e68814f38e5e7f3e734bbf not found: ID does not exist" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.495742 4880 scope.go:117] "RemoveContainer" containerID="e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744" Dec 01 05:27:50 crc kubenswrapper[4880]: E1201 05:27:50.496000 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744\": container with ID starting with e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744 not found: ID does not exist" containerID="e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.496018 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744"} err="failed to get container status \"e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744\": rpc error: code = NotFound desc = could not find container \"e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744\": container with ID starting with e3f9a206fca5b89fac9e1108cbc31adcd0c4731dd350e15637e141ca48f27744 not found: ID does not exist" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.496035 4880 scope.go:117] "RemoveContainer" containerID="cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2" Dec 01 05:27:50 crc kubenswrapper[4880]: E1201 
05:27:50.496211 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2\": container with ID starting with cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2 not found: ID does not exist" containerID="cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.496230 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2"} err="failed to get container status \"cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2\": rpc error: code = NotFound desc = could not find container \"cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2\": container with ID starting with cf4cf58321068d270b8816635d8533975162ae9cafd2b6c4babec83b1e6deab2 not found: ID does not exist" Dec 01 05:27:50 crc kubenswrapper[4880]: I1201 05:27:50.807464 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" path="/var/lib/kubelet/pods/cbaf6029-822b-4869-8333-7f3d54ccda07/volumes" Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.368972 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.369618 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.369691 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.370792 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.370923 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" gracePeriod=600 Dec 01 05:28:17 crc kubenswrapper[4880]: E1201 05:28:17.502370 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.662233 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" exitCode=0 Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.662300 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" 
event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e"} Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.662389 4880 scope.go:117] "RemoveContainer" containerID="4895c0ebe62faeb46e0983701ccde4b73c860ade0c3da071a7ae68885c0b352a" Dec 01 05:28:17 crc kubenswrapper[4880]: I1201 05:28:17.663063 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:28:17 crc kubenswrapper[4880]: E1201 05:28:17.663355 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:28:32 crc kubenswrapper[4880]: I1201 05:28:32.788991 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:28:32 crc kubenswrapper[4880]: E1201 05:28:32.790544 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:28:45 crc kubenswrapper[4880]: I1201 05:28:45.784501 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:28:45 crc kubenswrapper[4880]: E1201 05:28:45.785549 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:28:56 crc kubenswrapper[4880]: I1201 05:28:56.784570 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:28:56 crc kubenswrapper[4880]: E1201 05:28:56.785990 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:29:07 crc kubenswrapper[4880]: I1201 05:29:07.784961 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:29:07 crc kubenswrapper[4880]: E1201 05:29:07.785932 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:29:19 crc kubenswrapper[4880]: I1201 05:29:19.783862 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:29:19 crc kubenswrapper[4880]: E1201 05:29:19.785337 4880 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:29:32 crc kubenswrapper[4880]: I1201 05:29:32.783768 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:29:32 crc kubenswrapper[4880]: E1201 05:29:32.784731 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:29:44 crc kubenswrapper[4880]: I1201 05:29:44.786704 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:29:44 crc kubenswrapper[4880]: E1201 05:29:44.787710 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:29:57 crc kubenswrapper[4880]: I1201 05:29:57.784580 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:29:57 crc kubenswrapper[4880]: E1201 05:29:57.785706 4880 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.154080 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv"] Dec 01 05:30:00 crc kubenswrapper[4880]: E1201 05:30:00.154701 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="extract-utilities" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.154712 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="extract-utilities" Dec 01 05:30:00 crc kubenswrapper[4880]: E1201 05:30:00.154739 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="registry-server" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.154745 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="registry-server" Dec 01 05:30:00 crc kubenswrapper[4880]: E1201 05:30:00.154752 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="extract-content" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.154759 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="extract-content" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.154973 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbaf6029-822b-4869-8333-7f3d54ccda07" containerName="registry-server" Dec 01 
05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.155580 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.160991 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.163155 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.169457 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv"] Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.334006 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f6f7f-3543-4dd4-865e-29320542c3b3-secret-volume\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.334092 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczpl\" (UniqueName: \"kubernetes.io/projected/f88f6f7f-3543-4dd4-865e-29320542c3b3-kube-api-access-pczpl\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.334159 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f6f7f-3543-4dd4-865e-29320542c3b3-config-volume\") pod 
\"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.436338 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f6f7f-3543-4dd4-865e-29320542c3b3-secret-volume\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.436421 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pczpl\" (UniqueName: \"kubernetes.io/projected/f88f6f7f-3543-4dd4-865e-29320542c3b3-kube-api-access-pczpl\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.436489 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f6f7f-3543-4dd4-865e-29320542c3b3-config-volume\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.437675 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f6f7f-3543-4dd4-865e-29320542c3b3-config-volume\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.442982 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/f88f6f7f-3543-4dd4-865e-29320542c3b3-secret-volume\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.464121 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pczpl\" (UniqueName: \"kubernetes.io/projected/f88f6f7f-3543-4dd4-865e-29320542c3b3-kube-api-access-pczpl\") pod \"collect-profiles-29409450-j2gqv\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:00 crc kubenswrapper[4880]: I1201 05:30:00.475845 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:01 crc kubenswrapper[4880]: I1201 05:30:01.006932 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv"] Dec 01 05:30:01 crc kubenswrapper[4880]: I1201 05:30:01.828040 4880 generic.go:334] "Generic (PLEG): container finished" podID="f88f6f7f-3543-4dd4-865e-29320542c3b3" containerID="d56a5f77ddf2baf65d411f8bedfff8705d5eed995087cde96b16d81240fe3798" exitCode=0 Dec 01 05:30:01 crc kubenswrapper[4880]: I1201 05:30:01.828114 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" event={"ID":"f88f6f7f-3543-4dd4-865e-29320542c3b3","Type":"ContainerDied","Data":"d56a5f77ddf2baf65d411f8bedfff8705d5eed995087cde96b16d81240fe3798"} Dec 01 05:30:01 crc kubenswrapper[4880]: I1201 05:30:01.828372 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" 
event={"ID":"f88f6f7f-3543-4dd4-865e-29320542c3b3","Type":"ContainerStarted","Data":"c0b98272bf5a9fa09007344a0365efb3b2b80299d9abf53529e5721f25e5e1ca"} Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.246108 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.303191 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f6f7f-3543-4dd4-865e-29320542c3b3-config-volume\") pod \"f88f6f7f-3543-4dd4-865e-29320542c3b3\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.303323 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f6f7f-3543-4dd4-865e-29320542c3b3-secret-volume\") pod \"f88f6f7f-3543-4dd4-865e-29320542c3b3\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.303345 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pczpl\" (UniqueName: \"kubernetes.io/projected/f88f6f7f-3543-4dd4-865e-29320542c3b3-kube-api-access-pczpl\") pod \"f88f6f7f-3543-4dd4-865e-29320542c3b3\" (UID: \"f88f6f7f-3543-4dd4-865e-29320542c3b3\") " Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.304066 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88f6f7f-3543-4dd4-865e-29320542c3b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "f88f6f7f-3543-4dd4-865e-29320542c3b3" (UID: "f88f6f7f-3543-4dd4-865e-29320542c3b3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.311392 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88f6f7f-3543-4dd4-865e-29320542c3b3-kube-api-access-pczpl" (OuterVolumeSpecName: "kube-api-access-pczpl") pod "f88f6f7f-3543-4dd4-865e-29320542c3b3" (UID: "f88f6f7f-3543-4dd4-865e-29320542c3b3"). InnerVolumeSpecName "kube-api-access-pczpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.316099 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88f6f7f-3543-4dd4-865e-29320542c3b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f88f6f7f-3543-4dd4-865e-29320542c3b3" (UID: "f88f6f7f-3543-4dd4-865e-29320542c3b3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.405326 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f88f6f7f-3543-4dd4-865e-29320542c3b3-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.405357 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pczpl\" (UniqueName: \"kubernetes.io/projected/f88f6f7f-3543-4dd4-865e-29320542c3b3-kube-api-access-pczpl\") on node \"crc\" DevicePath \"\"" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.405366 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f88f6f7f-3543-4dd4-865e-29320542c3b3-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.851534 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" 
event={"ID":"f88f6f7f-3543-4dd4-865e-29320542c3b3","Type":"ContainerDied","Data":"c0b98272bf5a9fa09007344a0365efb3b2b80299d9abf53529e5721f25e5e1ca"} Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.851890 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b98272bf5a9fa09007344a0365efb3b2b80299d9abf53529e5721f25e5e1ca" Dec 01 05:30:03 crc kubenswrapper[4880]: I1201 05:30:03.851623 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409450-j2gqv" Dec 01 05:30:04 crc kubenswrapper[4880]: I1201 05:30:04.368356 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv"] Dec 01 05:30:04 crc kubenswrapper[4880]: I1201 05:30:04.376367 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409405-587zv"] Dec 01 05:30:04 crc kubenswrapper[4880]: I1201 05:30:04.795681 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69df4749-538a-4da6-ab2f-99dd1d6b8d55" path="/var/lib/kubelet/pods/69df4749-538a-4da6-ab2f-99dd1d6b8d55/volumes" Dec 01 05:30:10 crc kubenswrapper[4880]: I1201 05:30:10.826239 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:30:10 crc kubenswrapper[4880]: E1201 05:30:10.827001 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:30:25 crc kubenswrapper[4880]: I1201 05:30:25.784349 4880 scope.go:117] "RemoveContainer" 
containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:30:25 crc kubenswrapper[4880]: E1201 05:30:25.785188 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.108113 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ncf64"] Dec 01 05:30:34 crc kubenswrapper[4880]: E1201 05:30:34.109483 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88f6f7f-3543-4dd4-865e-29320542c3b3" containerName="collect-profiles" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.109508 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88f6f7f-3543-4dd4-865e-29320542c3b3" containerName="collect-profiles" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.109906 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88f6f7f-3543-4dd4-865e-29320542c3b3" containerName="collect-profiles" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.111938 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.130705 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncf64"] Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.135373 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6fwm\" (UniqueName: \"kubernetes.io/projected/f8abddd6-3055-4117-b6ac-238a8ae98c19-kube-api-access-c6fwm\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.135510 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-utilities\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.135543 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-catalog-content\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.237122 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6fwm\" (UniqueName: \"kubernetes.io/projected/f8abddd6-3055-4117-b6ac-238a8ae98c19-kube-api-access-c6fwm\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.237222 4880 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-utilities\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.237250 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-catalog-content\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.237998 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-catalog-content\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.238327 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-utilities\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.266600 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6fwm\" (UniqueName: \"kubernetes.io/projected/f8abddd6-3055-4117-b6ac-238a8ae98c19-kube-api-access-c6fwm\") pod \"redhat-marketplace-ncf64\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.429290 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:34 crc kubenswrapper[4880]: I1201 05:30:34.999247 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncf64"] Dec 01 05:30:35 crc kubenswrapper[4880]: I1201 05:30:35.181069 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncf64" event={"ID":"f8abddd6-3055-4117-b6ac-238a8ae98c19","Type":"ContainerStarted","Data":"485b9cace063801b6c779b217a79fe42123a5a1630cba8c027e8bca11206555a"} Dec 01 05:30:36 crc kubenswrapper[4880]: I1201 05:30:36.192451 4880 generic.go:334] "Generic (PLEG): container finished" podID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerID="78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1" exitCode=0 Dec 01 05:30:36 crc kubenswrapper[4880]: I1201 05:30:36.192544 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncf64" event={"ID":"f8abddd6-3055-4117-b6ac-238a8ae98c19","Type":"ContainerDied","Data":"78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1"} Dec 01 05:30:37 crc kubenswrapper[4880]: I1201 05:30:37.783934 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:30:37 crc kubenswrapper[4880]: E1201 05:30:37.784684 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:30:38 crc kubenswrapper[4880]: I1201 05:30:38.211378 4880 generic.go:334] "Generic (PLEG): container finished" podID="f8abddd6-3055-4117-b6ac-238a8ae98c19" 
containerID="bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba" exitCode=0 Dec 01 05:30:38 crc kubenswrapper[4880]: I1201 05:30:38.211428 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncf64" event={"ID":"f8abddd6-3055-4117-b6ac-238a8ae98c19","Type":"ContainerDied","Data":"bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba"} Dec 01 05:30:39 crc kubenswrapper[4880]: I1201 05:30:39.220826 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncf64" event={"ID":"f8abddd6-3055-4117-b6ac-238a8ae98c19","Type":"ContainerStarted","Data":"846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5"} Dec 01 05:30:39 crc kubenswrapper[4880]: I1201 05:30:39.245794 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ncf64" podStartSLOduration=2.609658551 podStartE2EDuration="5.245503391s" podCreationTimestamp="2025-12-01 05:30:34 +0000 UTC" firstStartedPulling="2025-12-01 05:30:36.195632395 +0000 UTC m=+9265.706886787" lastFinishedPulling="2025-12-01 05:30:38.831477245 +0000 UTC m=+9268.342731627" observedRunningTime="2025-12-01 05:30:39.240064988 +0000 UTC m=+9268.751319360" watchObservedRunningTime="2025-12-01 05:30:39.245503391 +0000 UTC m=+9268.756757753" Dec 01 05:30:44 crc kubenswrapper[4880]: I1201 05:30:44.429900 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:44 crc kubenswrapper[4880]: I1201 05:30:44.431363 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:44 crc kubenswrapper[4880]: I1201 05:30:44.518927 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:45 crc kubenswrapper[4880]: I1201 05:30:45.323801 
4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:45 crc kubenswrapper[4880]: I1201 05:30:45.378672 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncf64"] Dec 01 05:30:46 crc kubenswrapper[4880]: I1201 05:30:46.979541 4880 scope.go:117] "RemoveContainer" containerID="b2c78f2eae30b91f646cb849077d2abc7b92aef20adcc5c71e6fb19cad7a7214" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.287067 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ncf64" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="registry-server" containerID="cri-o://846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5" gracePeriod=2 Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.753598 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.869035 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6fwm\" (UniqueName: \"kubernetes.io/projected/f8abddd6-3055-4117-b6ac-238a8ae98c19-kube-api-access-c6fwm\") pod \"f8abddd6-3055-4117-b6ac-238a8ae98c19\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.869485 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-utilities\") pod \"f8abddd6-3055-4117-b6ac-238a8ae98c19\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.869518 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-catalog-content\") pod \"f8abddd6-3055-4117-b6ac-238a8ae98c19\" (UID: \"f8abddd6-3055-4117-b6ac-238a8ae98c19\") " Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.870061 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-utilities" (OuterVolumeSpecName: "utilities") pod "f8abddd6-3055-4117-b6ac-238a8ae98c19" (UID: "f8abddd6-3055-4117-b6ac-238a8ae98c19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.882289 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8abddd6-3055-4117-b6ac-238a8ae98c19-kube-api-access-c6fwm" (OuterVolumeSpecName: "kube-api-access-c6fwm") pod "f8abddd6-3055-4117-b6ac-238a8ae98c19" (UID: "f8abddd6-3055-4117-b6ac-238a8ae98c19"). InnerVolumeSpecName "kube-api-access-c6fwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.889815 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8abddd6-3055-4117-b6ac-238a8ae98c19" (UID: "f8abddd6-3055-4117-b6ac-238a8ae98c19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.971575 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6fwm\" (UniqueName: \"kubernetes.io/projected/f8abddd6-3055-4117-b6ac-238a8ae98c19-kube-api-access-c6fwm\") on node \"crc\" DevicePath \"\"" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.971605 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:30:47 crc kubenswrapper[4880]: I1201 05:30:47.971614 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8abddd6-3055-4117-b6ac-238a8ae98c19-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.297605 4880 generic.go:334] "Generic (PLEG): container finished" podID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerID="846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5" exitCode=0 Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.297656 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncf64" event={"ID":"f8abddd6-3055-4117-b6ac-238a8ae98c19","Type":"ContainerDied","Data":"846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5"} Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.297690 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ncf64" event={"ID":"f8abddd6-3055-4117-b6ac-238a8ae98c19","Type":"ContainerDied","Data":"485b9cace063801b6c779b217a79fe42123a5a1630cba8c027e8bca11206555a"} Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.297735 4880 scope.go:117] "RemoveContainer" containerID="846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 
05:30:48.299040 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ncf64" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.335030 4880 scope.go:117] "RemoveContainer" containerID="bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.351088 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncf64"] Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.355390 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ncf64"] Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.376389 4880 scope.go:117] "RemoveContainer" containerID="78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.405087 4880 scope.go:117] "RemoveContainer" containerID="846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5" Dec 01 05:30:48 crc kubenswrapper[4880]: E1201 05:30:48.406941 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5\": container with ID starting with 846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5 not found: ID does not exist" containerID="846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.406985 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5"} err="failed to get container status \"846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5\": rpc error: code = NotFound desc = could not find container \"846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5\": container with ID starting with 
846cd1a5783098c20c8f4adc89931fdbd8a8cabc4a3eee371c4f46ff6fa5b2a5 not found: ID does not exist" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.407012 4880 scope.go:117] "RemoveContainer" containerID="bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba" Dec 01 05:30:48 crc kubenswrapper[4880]: E1201 05:30:48.407310 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba\": container with ID starting with bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba not found: ID does not exist" containerID="bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.407348 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba"} err="failed to get container status \"bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba\": rpc error: code = NotFound desc = could not find container \"bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba\": container with ID starting with bb56f52d3fbd9a147d37bd9f12b42df9f6d347d55a129f63d8184699ecac43ba not found: ID does not exist" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.407379 4880 scope.go:117] "RemoveContainer" containerID="78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1" Dec 01 05:30:48 crc kubenswrapper[4880]: E1201 05:30:48.407627 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1\": container with ID starting with 78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1 not found: ID does not exist" containerID="78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1" Dec 01 05:30:48 crc 
kubenswrapper[4880]: I1201 05:30:48.407667 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1"} err="failed to get container status \"78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1\": rpc error: code = NotFound desc = could not find container \"78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1\": container with ID starting with 78652d960074fc4eb2fbb008d4fff46db4129f462f47a2b07205cf65e394a7e1 not found: ID does not exist" Dec 01 05:30:48 crc kubenswrapper[4880]: I1201 05:30:48.794255 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" path="/var/lib/kubelet/pods/f8abddd6-3055-4117-b6ac-238a8ae98c19/volumes" Dec 01 05:30:51 crc kubenswrapper[4880]: I1201 05:30:51.784482 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:30:51 crc kubenswrapper[4880]: E1201 05:30:51.784989 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:31:05 crc kubenswrapper[4880]: I1201 05:31:05.786505 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:31:05 crc kubenswrapper[4880]: E1201 05:31:05.787636 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:31:20 crc kubenswrapper[4880]: I1201 05:31:20.799060 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:31:20 crc kubenswrapper[4880]: E1201 05:31:20.800098 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:31:31 crc kubenswrapper[4880]: I1201 05:31:31.784555 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:31:31 crc kubenswrapper[4880]: E1201 05:31:31.785351 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:31:46 crc kubenswrapper[4880]: I1201 05:31:46.784239 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:31:46 crc kubenswrapper[4880]: E1201 05:31:46.784934 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:32:00 crc kubenswrapper[4880]: I1201 05:32:00.783778 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:32:00 crc kubenswrapper[4880]: E1201 05:32:00.784523 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:32:15 crc kubenswrapper[4880]: I1201 05:32:15.784519 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:32:15 crc kubenswrapper[4880]: E1201 05:32:15.785593 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:32:27 crc kubenswrapper[4880]: I1201 05:32:27.784167 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:32:27 crc kubenswrapper[4880]: E1201 05:32:27.786282 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.555163 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-prkdk"] Dec 01 05:32:36 crc kubenswrapper[4880]: E1201 05:32:36.558377 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="registry-server" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.558515 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="registry-server" Dec 01 05:32:36 crc kubenswrapper[4880]: E1201 05:32:36.558672 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="extract-utilities" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.558775 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="extract-utilities" Dec 01 05:32:36 crc kubenswrapper[4880]: E1201 05:32:36.558926 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="extract-content" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.559029 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="extract-content" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.559416 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8abddd6-3055-4117-b6ac-238a8ae98c19" containerName="registry-server" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.563665 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.587895 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prkdk"] Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.628321 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-utilities\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.628457 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-catalog-content\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.628565 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvjj\" (UniqueName: \"kubernetes.io/projected/7679472e-8baf-4ffb-a584-c16c42ab50b9-kube-api-access-ggvjj\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.730157 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-utilities\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.730240 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-catalog-content\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.730298 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvjj\" (UniqueName: \"kubernetes.io/projected/7679472e-8baf-4ffb-a584-c16c42ab50b9-kube-api-access-ggvjj\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.731195 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-catalog-content\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.731402 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-utilities\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.754709 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvjj\" (UniqueName: \"kubernetes.io/projected/7679472e-8baf-4ffb-a584-c16c42ab50b9-kube-api-access-ggvjj\") pod \"redhat-operators-prkdk\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:36 crc kubenswrapper[4880]: I1201 05:32:36.894576 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:37 crc kubenswrapper[4880]: I1201 05:32:37.421287 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prkdk"] Dec 01 05:32:37 crc kubenswrapper[4880]: I1201 05:32:37.512174 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerStarted","Data":"d09d0645e56672349e3c9196e377d559b679a365f9f538a30b93a630f4b752d3"} Dec 01 05:32:38 crc kubenswrapper[4880]: I1201 05:32:38.521759 4880 generic.go:334] "Generic (PLEG): container finished" podID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerID="cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7" exitCode=0 Dec 01 05:32:38 crc kubenswrapper[4880]: I1201 05:32:38.521811 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerDied","Data":"cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7"} Dec 01 05:32:38 crc kubenswrapper[4880]: I1201 05:32:38.525775 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 05:32:39 crc kubenswrapper[4880]: I1201 05:32:39.533527 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerStarted","Data":"4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28"} Dec 01 05:32:42 crc kubenswrapper[4880]: I1201 05:32:42.573392 4880 generic.go:334] "Generic (PLEG): container finished" podID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerID="4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28" exitCode=0 Dec 01 05:32:42 crc kubenswrapper[4880]: I1201 05:32:42.573491 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerDied","Data":"4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28"} Dec 01 05:32:42 crc kubenswrapper[4880]: I1201 05:32:42.785837 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:32:42 crc kubenswrapper[4880]: E1201 05:32:42.786220 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:32:43 crc kubenswrapper[4880]: I1201 05:32:43.590899 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerStarted","Data":"ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86"} Dec 01 05:32:43 crc kubenswrapper[4880]: I1201 05:32:43.618300 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-prkdk" podStartSLOduration=2.818514408 podStartE2EDuration="7.618097201s" podCreationTimestamp="2025-12-01 05:32:36 +0000 UTC" firstStartedPulling="2025-12-01 05:32:38.525220815 +0000 UTC m=+9388.036475197" lastFinishedPulling="2025-12-01 05:32:43.324803608 +0000 UTC m=+9392.836057990" observedRunningTime="2025-12-01 05:32:43.615716013 +0000 UTC m=+9393.126970435" watchObservedRunningTime="2025-12-01 05:32:43.618097201 +0000 UTC m=+9393.129351593" Dec 01 05:32:46 crc kubenswrapper[4880]: I1201 05:32:46.895578 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:46 crc kubenswrapper[4880]: I1201 05:32:46.896885 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:32:47 crc kubenswrapper[4880]: I1201 05:32:47.991292 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-prkdk" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="registry-server" probeResult="failure" output=< Dec 01 05:32:47 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:32:47 crc kubenswrapper[4880]: > Dec 01 05:32:53 crc kubenswrapper[4880]: I1201 05:32:53.783574 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:32:53 crc kubenswrapper[4880]: E1201 05:32:53.785453 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:32:57 crc kubenswrapper[4880]: I1201 05:32:57.971993 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-prkdk" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="registry-server" probeResult="failure" output=< Dec 01 05:32:57 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:32:57 crc kubenswrapper[4880]: > Dec 01 05:33:04 crc kubenswrapper[4880]: I1201 05:33:04.784915 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e" Dec 01 05:33:04 crc kubenswrapper[4880]: E1201 05:33:04.785709 4880 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:33:07 crc kubenswrapper[4880]: I1201 05:33:07.000666 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:33:07 crc kubenswrapper[4880]: I1201 05:33:07.087296 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:33:07 crc kubenswrapper[4880]: I1201 05:33:07.748289 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-prkdk"] Dec 01 05:33:08 crc kubenswrapper[4880]: I1201 05:33:08.863066 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-prkdk" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="registry-server" containerID="cri-o://ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86" gracePeriod=2 Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.418964 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.495429 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-utilities\") pod \"7679472e-8baf-4ffb-a584-c16c42ab50b9\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.495500 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-catalog-content\") pod \"7679472e-8baf-4ffb-a584-c16c42ab50b9\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.495601 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggvjj\" (UniqueName: \"kubernetes.io/projected/7679472e-8baf-4ffb-a584-c16c42ab50b9-kube-api-access-ggvjj\") pod \"7679472e-8baf-4ffb-a584-c16c42ab50b9\" (UID: \"7679472e-8baf-4ffb-a584-c16c42ab50b9\") " Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.496901 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-utilities" (OuterVolumeSpecName: "utilities") pod "7679472e-8baf-4ffb-a584-c16c42ab50b9" (UID: "7679472e-8baf-4ffb-a584-c16c42ab50b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.518490 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7679472e-8baf-4ffb-a584-c16c42ab50b9-kube-api-access-ggvjj" (OuterVolumeSpecName: "kube-api-access-ggvjj") pod "7679472e-8baf-4ffb-a584-c16c42ab50b9" (UID: "7679472e-8baf-4ffb-a584-c16c42ab50b9"). InnerVolumeSpecName "kube-api-access-ggvjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.589463 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7679472e-8baf-4ffb-a584-c16c42ab50b9" (UID: "7679472e-8baf-4ffb-a584-c16c42ab50b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.597485 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggvjj\" (UniqueName: \"kubernetes.io/projected/7679472e-8baf-4ffb-a584-c16c42ab50b9-kube-api-access-ggvjj\") on node \"crc\" DevicePath \"\"" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.597520 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.597530 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7679472e-8baf-4ffb-a584-c16c42ab50b9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.873337 4880 generic.go:334] "Generic (PLEG): container finished" podID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerID="ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86" exitCode=0 Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.873437 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-prkdk" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.873438 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerDied","Data":"ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86"} Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.873845 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prkdk" event={"ID":"7679472e-8baf-4ffb-a584-c16c42ab50b9","Type":"ContainerDied","Data":"d09d0645e56672349e3c9196e377d559b679a365f9f538a30b93a630f4b752d3"} Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.873865 4880 scope.go:117] "RemoveContainer" containerID="ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.896277 4880 scope.go:117] "RemoveContainer" containerID="4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.919905 4880 scope.go:117] "RemoveContainer" containerID="cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7" Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.920056 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-prkdk"] Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.929788 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-prkdk"] Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.965671 4880 scope.go:117] "RemoveContainer" containerID="ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86" Dec 01 05:33:09 crc kubenswrapper[4880]: E1201 05:33:09.966381 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86\": container with ID starting with ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86 not found: ID does not exist" containerID="ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86"
Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.966418 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86"} err="failed to get container status \"ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86\": rpc error: code = NotFound desc = could not find container \"ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86\": container with ID starting with ca2f328dc7643dcae6dfa714640f48a7387f9e1e3074f701c028012aecefbb86 not found: ID does not exist"
Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.966443 4880 scope.go:117] "RemoveContainer" containerID="4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28"
Dec 01 05:33:09 crc kubenswrapper[4880]: E1201 05:33:09.966783 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28\": container with ID starting with 4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28 not found: ID does not exist" containerID="4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28"
Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.966949 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28"} err="failed to get container status \"4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28\": rpc error: code = NotFound desc = could not find container \"4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28\": container with ID starting with 4a87f36d865e51d864691f3b3f145d5f9aca51fbd4ee45d67023e50d50beab28 not found: ID does not exist"
Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.967105 4880 scope.go:117] "RemoveContainer" containerID="cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7"
Dec 01 05:33:09 crc kubenswrapper[4880]: E1201 05:33:09.967635 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7\": container with ID starting with cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7 not found: ID does not exist" containerID="cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7"
Dec 01 05:33:09 crc kubenswrapper[4880]: I1201 05:33:09.967675 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7"} err="failed to get container status \"cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7\": rpc error: code = NotFound desc = could not find container \"cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7\": container with ID starting with cc0f4d3b773ebf95604508db0d28f731829eb2d99eb5a3150302cdc76a6b2df7 not found: ID does not exist"
Dec 01 05:33:10 crc kubenswrapper[4880]: I1201 05:33:10.804860 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" path="/var/lib/kubelet/pods/7679472e-8baf-4ffb-a584-c16c42ab50b9/volumes"
Dec 01 05:33:17 crc kubenswrapper[4880]: I1201 05:33:17.784622 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e"
Dec 01 05:33:18 crc kubenswrapper[4880]: I1201 05:33:18.984833 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"4f2c9655492bc88eae059962d625b57d15cad37507a21a9b7a682908476554a1"}
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.618241 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b8n77"]
Dec 01 05:33:38 crc kubenswrapper[4880]: E1201 05:33:38.619258 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="extract-content"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.619273 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="extract-content"
Dec 01 05:33:38 crc kubenswrapper[4880]: E1201 05:33:38.619291 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="extract-utilities"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.619299 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="extract-utilities"
Dec 01 05:33:38 crc kubenswrapper[4880]: E1201 05:33:38.619327 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="registry-server"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.619333 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="registry-server"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.619525 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="7679472e-8baf-4ffb-a584-c16c42ab50b9" containerName="registry-server"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.620958 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.647137 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8n77"]
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.728035 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-utilities\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.728205 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgss\" (UniqueName: \"kubernetes.io/projected/b61192ca-d443-4f52-bf2f-750b82b6faaa-kube-api-access-mfgss\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.728274 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-catalog-content\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.831655 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-utilities\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.832617 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-utilities\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.832998 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfgss\" (UniqueName: \"kubernetes.io/projected/b61192ca-d443-4f52-bf2f-750b82b6faaa-kube-api-access-mfgss\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.833238 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-catalog-content\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.833846 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-catalog-content\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.853694 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfgss\" (UniqueName: \"kubernetes.io/projected/b61192ca-d443-4f52-bf2f-750b82b6faaa-kube-api-access-mfgss\") pod \"certified-operators-b8n77\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") " pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:38 crc kubenswrapper[4880]: I1201 05:33:38.942292 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:39 crc kubenswrapper[4880]: I1201 05:33:39.453622 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8n77"]
Dec 01 05:33:40 crc kubenswrapper[4880]: I1201 05:33:40.230977 4880 generic.go:334] "Generic (PLEG): container finished" podID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerID="baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052" exitCode=0
Dec 01 05:33:40 crc kubenswrapper[4880]: I1201 05:33:40.231105 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerDied","Data":"baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052"}
Dec 01 05:33:40 crc kubenswrapper[4880]: I1201 05:33:40.231249 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerStarted","Data":"42786c7dee53e7ae70a0278eff2e12c6af1e58627f66685890b956075733ae32"}
Dec 01 05:33:41 crc kubenswrapper[4880]: I1201 05:33:41.243217 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerStarted","Data":"69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a"}
Dec 01 05:33:42 crc kubenswrapper[4880]: I1201 05:33:42.252152 4880 generic.go:334] "Generic (PLEG): container finished" podID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerID="69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a" exitCode=0
Dec 01 05:33:42 crc kubenswrapper[4880]: I1201 05:33:42.252240 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerDied","Data":"69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a"}
Dec 01 05:33:43 crc kubenswrapper[4880]: I1201 05:33:43.261292 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerStarted","Data":"2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067"}
Dec 01 05:33:48 crc kubenswrapper[4880]: I1201 05:33:48.942532 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:48 crc kubenswrapper[4880]: I1201 05:33:48.944267 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:48 crc kubenswrapper[4880]: I1201 05:33:48.988745 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:49 crc kubenswrapper[4880]: I1201 05:33:49.014897 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b8n77" podStartSLOduration=8.367778378 podStartE2EDuration="11.014865983s" podCreationTimestamp="2025-12-01 05:33:38 +0000 UTC" firstStartedPulling="2025-12-01 05:33:40.234117305 +0000 UTC m=+9449.745371677" lastFinishedPulling="2025-12-01 05:33:42.88120491 +0000 UTC m=+9452.392459282" observedRunningTime="2025-12-01 05:33:43.377853487 +0000 UTC m=+9452.889107849" watchObservedRunningTime="2025-12-01 05:33:49.014865983 +0000 UTC m=+9458.526120355"
Dec 01 05:33:49 crc kubenswrapper[4880]: I1201 05:33:49.390615 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:49 crc kubenswrapper[4880]: I1201 05:33:49.445374 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8n77"]
Dec 01 05:33:51 crc kubenswrapper[4880]: I1201 05:33:51.341596 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b8n77" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="registry-server" containerID="cri-o://2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067" gracePeriod=2
Dec 01 05:33:51 crc kubenswrapper[4880]: I1201 05:33:51.932170 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.087288 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfgss\" (UniqueName: \"kubernetes.io/projected/b61192ca-d443-4f52-bf2f-750b82b6faaa-kube-api-access-mfgss\") pod \"b61192ca-d443-4f52-bf2f-750b82b6faaa\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") "
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.087334 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-catalog-content\") pod \"b61192ca-d443-4f52-bf2f-750b82b6faaa\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") "
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.087405 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-utilities\") pod \"b61192ca-d443-4f52-bf2f-750b82b6faaa\" (UID: \"b61192ca-d443-4f52-bf2f-750b82b6faaa\") "
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.089116 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-utilities" (OuterVolumeSpecName: "utilities") pod "b61192ca-d443-4f52-bf2f-750b82b6faaa" (UID: "b61192ca-d443-4f52-bf2f-750b82b6faaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.098936 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61192ca-d443-4f52-bf2f-750b82b6faaa-kube-api-access-mfgss" (OuterVolumeSpecName: "kube-api-access-mfgss") pod "b61192ca-d443-4f52-bf2f-750b82b6faaa" (UID: "b61192ca-d443-4f52-bf2f-750b82b6faaa"). InnerVolumeSpecName "kube-api-access-mfgss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.143334 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b61192ca-d443-4f52-bf2f-750b82b6faaa" (UID: "b61192ca-d443-4f52-bf2f-750b82b6faaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.190102 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.190142 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfgss\" (UniqueName: \"kubernetes.io/projected/b61192ca-d443-4f52-bf2f-750b82b6faaa-kube-api-access-mfgss\") on node \"crc\" DevicePath \"\""
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.190177 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b61192ca-d443-4f52-bf2f-750b82b6faaa-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.353120 4880 generic.go:334] "Generic (PLEG): container finished" podID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerID="2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067" exitCode=0
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.353181 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerDied","Data":"2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067"}
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.353454 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8n77" event={"ID":"b61192ca-d443-4f52-bf2f-750b82b6faaa","Type":"ContainerDied","Data":"42786c7dee53e7ae70a0278eff2e12c6af1e58627f66685890b956075733ae32"}
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.353477 4880 scope.go:117] "RemoveContainer" containerID="2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.353209 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8n77"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.382170 4880 scope.go:117] "RemoveContainer" containerID="69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.395827 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8n77"]
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.403846 4880 scope.go:117] "RemoveContainer" containerID="baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.411047 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b8n77"]
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.441689 4880 scope.go:117] "RemoveContainer" containerID="2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067"
Dec 01 05:33:52 crc kubenswrapper[4880]: E1201 05:33:52.442115 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067\": container with ID starting with 2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067 not found: ID does not exist" containerID="2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.442156 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067"} err="failed to get container status \"2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067\": rpc error: code = NotFound desc = could not find container \"2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067\": container with ID starting with 2e49619de0bab2b17ad77c960884eb9f9f320d50693adc1877f40706b664a067 not found: ID does not exist"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.442183 4880 scope.go:117] "RemoveContainer" containerID="69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a"
Dec 01 05:33:52 crc kubenswrapper[4880]: E1201 05:33:52.442576 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a\": container with ID starting with 69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a not found: ID does not exist" containerID="69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.442609 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a"} err="failed to get container status \"69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a\": rpc error: code = NotFound desc = could not find container \"69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a\": container with ID starting with 69b21e628af700bfe4f7278716ddaeb672cfcbc2454880a0f7184f169838f46a not found: ID does not exist"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.442626 4880 scope.go:117] "RemoveContainer" containerID="baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052"
Dec 01 05:33:52 crc kubenswrapper[4880]: E1201 05:33:52.442858 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052\": container with ID starting with baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052 not found: ID does not exist" containerID="baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.442903 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052"} err="failed to get container status \"baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052\": rpc error: code = NotFound desc = could not find container \"baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052\": container with ID starting with baa3db16e7a39320f144ba0dbf14450fe8ab3e626d7f5cb3d7def2394fa1d052 not found: ID does not exist"
Dec 01 05:33:52 crc kubenswrapper[4880]: I1201 05:33:52.793656 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" path="/var/lib/kubelet/pods/b61192ca-d443-4f52-bf2f-750b82b6faaa/volumes"
Dec 01 05:35:17 crc kubenswrapper[4880]: I1201 05:35:17.369111 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:35:17 crc kubenswrapper[4880]: I1201 05:35:17.369985 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 05:35:47 crc kubenswrapper[4880]: I1201 05:35:47.368991 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:35:47 crc kubenswrapper[4880]: I1201 05:35:47.369418 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.369399 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.370040 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.370109 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh"
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.371079 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f2c9655492bc88eae059962d625b57d15cad37507a21a9b7a682908476554a1"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.371141 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://4f2c9655492bc88eae059962d625b57d15cad37507a21a9b7a682908476554a1" gracePeriod=600
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.819376 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="4f2c9655492bc88eae059962d625b57d15cad37507a21a9b7a682908476554a1" exitCode=0
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.819455 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"4f2c9655492bc88eae059962d625b57d15cad37507a21a9b7a682908476554a1"}
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.819783 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"}
Dec 01 05:36:17 crc kubenswrapper[4880]: I1201 05:36:17.819817 4880 scope.go:117] "RemoveContainer" containerID="79ba3d89160d19243207d3cfe870310a9fc5687c594f1ff92bbf0793d6671d0e"
Dec 01 05:38:17 crc kubenswrapper[4880]: I1201 05:38:17.368837 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:38:17 crc kubenswrapper[4880]: I1201 05:38:17.369420 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 05:38:47 crc kubenswrapper[4880]: I1201 05:38:47.368863 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:38:47 crc kubenswrapper[4880]: I1201 05:38:47.369478 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.368921 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.369560 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.369634 4880 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-g45lh"
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.370685 4880 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"} pod="openshift-machine-config-operator/machine-config-daemon-g45lh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.370784 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" containerID="cri-o://4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" gracePeriod=600
Dec 01 05:39:17 crc kubenswrapper[4880]: E1201 05:39:17.501333 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.704148 4880 generic.go:334] "Generic (PLEG): container finished" podID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" exitCode=0
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.704196 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerDied","Data":"4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"}
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.704233 4880 scope.go:117] "RemoveContainer" containerID="4f2c9655492bc88eae059962d625b57d15cad37507a21a9b7a682908476554a1"
Dec 01 05:39:17 crc kubenswrapper[4880]: I1201 05:39:17.704922 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:39:17 crc kubenswrapper[4880]: E1201 05:39:17.705244 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:39:31 crc kubenswrapper[4880]: I1201 05:39:31.790433 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:39:31 crc kubenswrapper[4880]: E1201 05:39:31.791247 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:39:45 crc kubenswrapper[4880]: I1201 05:39:45.784375 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:39:45 crc kubenswrapper[4880]: E1201 05:39:45.785641 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:39:56 crc kubenswrapper[4880]: I1201 05:39:56.784658 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:39:56 crc kubenswrapper[4880]: E1201 05:39:56.785619 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:40:10 crc kubenswrapper[4880]: I1201 05:40:10.785416 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:40:10 crc kubenswrapper[4880]: E1201 05:40:10.786106 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:40:25 crc kubenswrapper[4880]: I1201 05:40:25.785028 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:40:25 crc kubenswrapper[4880]: E1201 05:40:25.786348 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:40:36 crc kubenswrapper[4880]: I1201 05:40:36.785505 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:40:36 crc kubenswrapper[4880]: E1201 05:40:36.786811 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:40:51 crc kubenswrapper[4880]: I1201 05:40:51.785258 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:40:51 crc kubenswrapper[4880]: E1201 05:40:51.786102 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:41:06 crc kubenswrapper[4880]: I1201 05:41:06.784918 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:41:06 crc kubenswrapper[4880]: E1201 05:41:06.786044 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:41:20 crc kubenswrapper[4880]: I1201 05:41:20.820969 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:41:20 crc kubenswrapper[4880]: E1201 05:41:20.828219 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:41:35 crc kubenswrapper[4880]: I1201 05:41:35.784466 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:41:35 crc kubenswrapper[4880]: E1201 05:41:35.785115 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:41:48 crc kubenswrapper[4880]: I1201 05:41:48.785042 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54"
Dec 01 05:41:48 crc kubenswrapper[4880]: E1201 05:41:48.785817 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41"
Dec 01 05:42:00 crc kubenswrapper[4880]: I1201 
05:42:00.796550 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:42:00 crc kubenswrapper[4880]: E1201 05:42:00.797457 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:42:11 crc kubenswrapper[4880]: I1201 05:42:11.784162 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:42:11 crc kubenswrapper[4880]: E1201 05:42:11.784776 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.074813 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gjjz"] Dec 01 05:42:12 crc kubenswrapper[4880]: E1201 05:42:12.075159 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="registry-server" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.075171 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="registry-server" Dec 01 05:42:12 crc kubenswrapper[4880]: E1201 05:42:12.075193 4880 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="extract-content" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.075199 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="extract-content" Dec 01 05:42:12 crc kubenswrapper[4880]: E1201 05:42:12.075208 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="extract-utilities" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.075214 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="extract-utilities" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.075479 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b61192ca-d443-4f52-bf2f-750b82b6faaa" containerName="registry-server" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.079247 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.097968 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gjjz"] Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.163437 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527mg\" (UniqueName: \"kubernetes.io/projected/9850d055-520d-48a4-8e9b-ace75a88012d-kube-api-access-527mg\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.163577 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-utilities\") pod \"community-operators-8gjjz\" (UID: 
\"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.163657 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-catalog-content\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.264775 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-catalog-content\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.264851 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527mg\" (UniqueName: \"kubernetes.io/projected/9850d055-520d-48a4-8e9b-ace75a88012d-kube-api-access-527mg\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.264944 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-utilities\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.265408 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-utilities\") pod \"community-operators-8gjjz\" (UID: 
\"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.265600 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-catalog-content\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.291479 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-527mg\" (UniqueName: \"kubernetes.io/projected/9850d055-520d-48a4-8e9b-ace75a88012d-kube-api-access-527mg\") pod \"community-operators-8gjjz\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.399984 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.686912 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tt74s"] Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.688972 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.701462 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tt74s"] Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.775292 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nqmh\" (UniqueName: \"kubernetes.io/projected/ad1f2925-341b-4fdc-961b-9b99fae81c6b-kube-api-access-8nqmh\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.775375 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-utilities\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.775786 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-catalog-content\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.877574 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-utilities\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.878043 4880 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-utilities\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.878436 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-catalog-content\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.878747 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-catalog-content\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.878908 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nqmh\" (UniqueName: \"kubernetes.io/projected/ad1f2925-341b-4fdc-961b-9b99fae81c6b-kube-api-access-8nqmh\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.894774 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nqmh\" (UniqueName: \"kubernetes.io/projected/ad1f2925-341b-4fdc-961b-9b99fae81c6b-kube-api-access-8nqmh\") pod \"redhat-marketplace-tt74s\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:12 crc kubenswrapper[4880]: I1201 05:42:12.919487 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-8gjjz"] Dec 01 05:42:12 crc kubenswrapper[4880]: W1201 05:42:12.948657 4880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9850d055_520d_48a4_8e9b_ace75a88012d.slice/crio-526cc06e98afd286d49023ab628460773a0e1d3c2510452d64088fcfe75216e1 WatchSource:0}: Error finding container 526cc06e98afd286d49023ab628460773a0e1d3c2510452d64088fcfe75216e1: Status 404 returned error can't find the container with id 526cc06e98afd286d49023ab628460773a0e1d3c2510452d64088fcfe75216e1 Dec 01 05:42:13 crc kubenswrapper[4880]: I1201 05:42:13.011755 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:13 crc kubenswrapper[4880]: I1201 05:42:13.014490 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerStarted","Data":"526cc06e98afd286d49023ab628460773a0e1d3c2510452d64088fcfe75216e1"} Dec 01 05:42:13 crc kubenswrapper[4880]: I1201 05:42:13.557013 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tt74s"] Dec 01 05:42:14 crc kubenswrapper[4880]: I1201 05:42:14.023264 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerID="dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d" exitCode=0 Dec 01 05:42:14 crc kubenswrapper[4880]: I1201 05:42:14.023360 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerDied","Data":"dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d"} Dec 01 05:42:14 crc kubenswrapper[4880]: I1201 05:42:14.023658 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerStarted","Data":"abf17e94e4e4bdbfd99d996e1ce5b44d0cb774fa7436ceb6ec9264d71136dd0c"} Dec 01 05:42:14 crc kubenswrapper[4880]: I1201 05:42:14.025739 4880 generic.go:334] "Generic (PLEG): container finished" podID="9850d055-520d-48a4-8e9b-ace75a88012d" containerID="b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c" exitCode=0 Dec 01 05:42:14 crc kubenswrapper[4880]: I1201 05:42:14.025778 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerDied","Data":"b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c"} Dec 01 05:42:14 crc kubenswrapper[4880]: I1201 05:42:14.025797 4880 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 05:42:16 crc kubenswrapper[4880]: I1201 05:42:16.043512 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerStarted","Data":"aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273"} Dec 01 05:42:16 crc kubenswrapper[4880]: I1201 05:42:16.046707 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerStarted","Data":"4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089"} Dec 01 05:42:17 crc kubenswrapper[4880]: I1201 05:42:17.056918 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerID="aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273" exitCode=0 Dec 01 05:42:17 crc kubenswrapper[4880]: I1201 05:42:17.057082 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerDied","Data":"aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273"} Dec 01 05:42:17 crc kubenswrapper[4880]: I1201 05:42:17.059460 4880 generic.go:334] "Generic (PLEG): container finished" podID="9850d055-520d-48a4-8e9b-ace75a88012d" containerID="4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089" exitCode=0 Dec 01 05:42:17 crc kubenswrapper[4880]: I1201 05:42:17.059487 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerDied","Data":"4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089"} Dec 01 05:42:18 crc kubenswrapper[4880]: I1201 05:42:18.070415 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerStarted","Data":"33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f"} Dec 01 05:42:18 crc kubenswrapper[4880]: I1201 05:42:18.073594 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerStarted","Data":"5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d"} Dec 01 05:42:18 crc kubenswrapper[4880]: I1201 05:42:18.092071 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tt74s" podStartSLOduration=2.515329438 podStartE2EDuration="6.092057581s" podCreationTimestamp="2025-12-01 05:42:12 +0000 UTC" firstStartedPulling="2025-12-01 05:42:14.02513912 +0000 UTC m=+9963.536393492" lastFinishedPulling="2025-12-01 05:42:17.601867223 +0000 UTC m=+9967.113121635" observedRunningTime="2025-12-01 05:42:18.08830765 +0000 UTC m=+9967.599562022" 
watchObservedRunningTime="2025-12-01 05:42:18.092057581 +0000 UTC m=+9967.603311953" Dec 01 05:42:18 crc kubenswrapper[4880]: I1201 05:42:18.108765 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gjjz" podStartSLOduration=2.671242232 podStartE2EDuration="6.108745379s" podCreationTimestamp="2025-12-01 05:42:12 +0000 UTC" firstStartedPulling="2025-12-01 05:42:14.030478181 +0000 UTC m=+9963.541732553" lastFinishedPulling="2025-12-01 05:42:17.467981288 +0000 UTC m=+9966.979235700" observedRunningTime="2025-12-01 05:42:18.106257038 +0000 UTC m=+9967.617511410" watchObservedRunningTime="2025-12-01 05:42:18.108745379 +0000 UTC m=+9967.619999751" Dec 01 05:42:22 crc kubenswrapper[4880]: I1201 05:42:22.401412 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:22 crc kubenswrapper[4880]: I1201 05:42:22.403096 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:22 crc kubenswrapper[4880]: I1201 05:42:22.463981 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:23 crc kubenswrapper[4880]: I1201 05:42:23.013304 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:23 crc kubenswrapper[4880]: I1201 05:42:23.013361 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:23 crc kubenswrapper[4880]: I1201 05:42:23.068267 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:23 crc kubenswrapper[4880]: I1201 05:42:23.172954 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:23 crc kubenswrapper[4880]: I1201 05:42:23.175037 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:24 crc kubenswrapper[4880]: I1201 05:42:24.866821 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tt74s"] Dec 01 05:42:25 crc kubenswrapper[4880]: I1201 05:42:25.136093 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tt74s" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="registry-server" containerID="cri-o://33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f" gracePeriod=2 Dec 01 05:42:25 crc kubenswrapper[4880]: I1201 05:42:25.468345 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gjjz"] Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.724733 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.840463 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-catalog-content\") pod \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.840606 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nqmh\" (UniqueName: \"kubernetes.io/projected/ad1f2925-341b-4fdc-961b-9b99fae81c6b-kube-api-access-8nqmh\") pod \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.840673 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-utilities\") pod \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\" (UID: \"ad1f2925-341b-4fdc-961b-9b99fae81c6b\") " Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.846723 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-utilities" (OuterVolumeSpecName: "utilities") pod "ad1f2925-341b-4fdc-961b-9b99fae81c6b" (UID: "ad1f2925-341b-4fdc-961b-9b99fae81c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.866664 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad1f2925-341b-4fdc-961b-9b99fae81c6b" (UID: "ad1f2925-341b-4fdc-961b-9b99fae81c6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.868042 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1f2925-341b-4fdc-961b-9b99fae81c6b-kube-api-access-8nqmh" (OuterVolumeSpecName: "kube-api-access-8nqmh") pod "ad1f2925-341b-4fdc-961b-9b99fae81c6b" (UID: "ad1f2925-341b-4fdc-961b-9b99fae81c6b"). InnerVolumeSpecName "kube-api-access-8nqmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.943708 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nqmh\" (UniqueName: \"kubernetes.io/projected/ad1f2925-341b-4fdc-961b-9b99fae81c6b-kube-api-access-8nqmh\") on node \"crc\" DevicePath \"\"" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.943746 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:25.943763 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1f2925-341b-4fdc-961b-9b99fae81c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.149171 4880 generic.go:334] "Generic (PLEG): container finished" podID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerID="33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f" exitCode=0 Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.149504 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gjjz" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="registry-server" containerID="cri-o://5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d" gracePeriod=2 Dec 01 05:42:26 crc kubenswrapper[4880]: 
I1201 05:42:26.150042 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tt74s" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.150107 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerDied","Data":"33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f"} Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.150152 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tt74s" event={"ID":"ad1f2925-341b-4fdc-961b-9b99fae81c6b","Type":"ContainerDied","Data":"abf17e94e4e4bdbfd99d996e1ce5b44d0cb774fa7436ceb6ec9264d71136dd0c"} Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.150180 4880 scope.go:117] "RemoveContainer" containerID="33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.190635 4880 scope.go:117] "RemoveContainer" containerID="aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.211849 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tt74s"] Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.220135 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tt74s"] Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.521145 4880 scope.go:117] "RemoveContainer" containerID="dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.754234 4880 scope.go:117] "RemoveContainer" containerID="33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f" Dec 01 05:42:26 crc kubenswrapper[4880]: E1201 05:42:26.760228 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f\": container with ID starting with 33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f not found: ID does not exist" containerID="33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.760274 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f"} err="failed to get container status \"33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f\": rpc error: code = NotFound desc = could not find container \"33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f\": container with ID starting with 33f525348822a3ba824ea85f5b49795ee36fe7478ac3a61b41462599fe8ccc9f not found: ID does not exist" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.760300 4880 scope.go:117] "RemoveContainer" containerID="aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273" Dec 01 05:42:26 crc kubenswrapper[4880]: E1201 05:42:26.761535 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273\": container with ID starting with aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273 not found: ID does not exist" containerID="aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.761586 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273"} err="failed to get container status \"aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273\": rpc error: code = NotFound desc = could not find container 
\"aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273\": container with ID starting with aa9f6751239e25993cf0b7d90bddd42b19d0e33f688de467c4cd38d4e0b0e273 not found: ID does not exist" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.761633 4880 scope.go:117] "RemoveContainer" containerID="dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d" Dec 01 05:42:26 crc kubenswrapper[4880]: E1201 05:42:26.762125 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d\": container with ID starting with dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d not found: ID does not exist" containerID="dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.762185 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d"} err="failed to get container status \"dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d\": rpc error: code = NotFound desc = could not find container \"dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d\": container with ID starting with dbbf624a6572fdb71a95ab53fe24897597e6cbf359492c620e9c1fd7928d718d not found: ID does not exist" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.785542 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:42:26 crc kubenswrapper[4880]: E1201 05:42:26.785945 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.808091 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" path="/var/lib/kubelet/pods/ad1f2925-341b-4fdc-961b-9b99fae81c6b/volumes" Dec 01 05:42:26 crc kubenswrapper[4880]: I1201 05:42:26.961355 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.063170 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-utilities\") pod \"9850d055-520d-48a4-8e9b-ace75a88012d\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.064233 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-utilities" (OuterVolumeSpecName: "utilities") pod "9850d055-520d-48a4-8e9b-ace75a88012d" (UID: "9850d055-520d-48a4-8e9b-ace75a88012d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.064461 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-catalog-content\") pod \"9850d055-520d-48a4-8e9b-ace75a88012d\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.064663 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527mg\" (UniqueName: \"kubernetes.io/projected/9850d055-520d-48a4-8e9b-ace75a88012d-kube-api-access-527mg\") pod \"9850d055-520d-48a4-8e9b-ace75a88012d\" (UID: \"9850d055-520d-48a4-8e9b-ace75a88012d\") " Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.065313 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.071028 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9850d055-520d-48a4-8e9b-ace75a88012d-kube-api-access-527mg" (OuterVolumeSpecName: "kube-api-access-527mg") pod "9850d055-520d-48a4-8e9b-ace75a88012d" (UID: "9850d055-520d-48a4-8e9b-ace75a88012d"). InnerVolumeSpecName "kube-api-access-527mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.153853 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9850d055-520d-48a4-8e9b-ace75a88012d" (UID: "9850d055-520d-48a4-8e9b-ace75a88012d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.171035 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-527mg\" (UniqueName: \"kubernetes.io/projected/9850d055-520d-48a4-8e9b-ace75a88012d-kube-api-access-527mg\") on node \"crc\" DevicePath \"\"" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.171275 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9850d055-520d-48a4-8e9b-ace75a88012d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.180317 4880 generic.go:334] "Generic (PLEG): container finished" podID="9850d055-520d-48a4-8e9b-ace75a88012d" containerID="5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d" exitCode=0 Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.180358 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerDied","Data":"5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d"} Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.180379 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gjjz" event={"ID":"9850d055-520d-48a4-8e9b-ace75a88012d","Type":"ContainerDied","Data":"526cc06e98afd286d49023ab628460773a0e1d3c2510452d64088fcfe75216e1"} Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.180394 4880 scope.go:117] "RemoveContainer" containerID="5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.180493 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gjjz" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.251328 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gjjz"] Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.260585 4880 scope.go:117] "RemoveContainer" containerID="4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.263363 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gjjz"] Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.284952 4880 scope.go:117] "RemoveContainer" containerID="b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.301564 4880 scope.go:117] "RemoveContainer" containerID="5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d" Dec 01 05:42:27 crc kubenswrapper[4880]: E1201 05:42:27.302046 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d\": container with ID starting with 5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d not found: ID does not exist" containerID="5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.302088 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d"} err="failed to get container status \"5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d\": rpc error: code = NotFound desc = could not find container \"5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d\": container with ID starting with 5120d370c931b689449a7ab96f9f540af4b2a2c9b79608ab57d6cd14a8e6a41d not 
found: ID does not exist" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.302114 4880 scope.go:117] "RemoveContainer" containerID="4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089" Dec 01 05:42:27 crc kubenswrapper[4880]: E1201 05:42:27.302584 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089\": container with ID starting with 4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089 not found: ID does not exist" containerID="4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.302726 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089"} err="failed to get container status \"4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089\": rpc error: code = NotFound desc = could not find container \"4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089\": container with ID starting with 4756a12e4d215248e3bb2ca58d14632427631e81ec08e3b37445ee23ebb38089 not found: ID does not exist" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.302797 4880 scope.go:117] "RemoveContainer" containerID="b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c" Dec 01 05:42:27 crc kubenswrapper[4880]: E1201 05:42:27.303225 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c\": container with ID starting with b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c not found: ID does not exist" containerID="b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c" Dec 01 05:42:27 crc kubenswrapper[4880]: I1201 05:42:27.303257 4880 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c"} err="failed to get container status \"b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c\": rpc error: code = NotFound desc = could not find container \"b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c\": container with ID starting with b2273fa8a25d1b592f30aa5b7065dbd6883922a6234e3ebf7ca284033186427c not found: ID does not exist" Dec 01 05:42:28 crc kubenswrapper[4880]: I1201 05:42:28.796131 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" path="/var/lib/kubelet/pods/9850d055-520d-48a4-8e9b-ace75a88012d/volumes" Dec 01 05:42:37 crc kubenswrapper[4880]: I1201 05:42:37.783720 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:42:37 crc kubenswrapper[4880]: E1201 05:42:37.784491 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:42:52 crc kubenswrapper[4880]: I1201 05:42:52.785143 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:42:52 crc kubenswrapper[4880]: E1201 05:42:52.785908 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:43:06 crc kubenswrapper[4880]: I1201 05:43:06.784110 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:43:06 crc kubenswrapper[4880]: E1201 05:43:06.785029 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:43:19 crc kubenswrapper[4880]: I1201 05:43:19.784031 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:43:19 crc kubenswrapper[4880]: E1201 05:43:19.784716 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.986193 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcqx9"] Dec 01 05:43:26 crc kubenswrapper[4880]: E1201 05:43:26.987970 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="registry-server" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988041 
4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="registry-server" Dec 01 05:43:26 crc kubenswrapper[4880]: E1201 05:43:26.988115 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="registry-server" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988163 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="registry-server" Dec 01 05:43:26 crc kubenswrapper[4880]: E1201 05:43:26.988215 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="extract-content" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988260 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="extract-content" Dec 01 05:43:26 crc kubenswrapper[4880]: E1201 05:43:26.988327 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="extract-content" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988382 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="extract-content" Dec 01 05:43:26 crc kubenswrapper[4880]: E1201 05:43:26.988444 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="extract-utilities" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988494 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="extract-utilities" Dec 01 05:43:26 crc kubenswrapper[4880]: E1201 05:43:26.988544 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="extract-utilities" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988589 4880 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="extract-utilities" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988842 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1f2925-341b-4fdc-961b-9b99fae81c6b" containerName="registry-server" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.988947 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="9850d055-520d-48a4-8e9b-ace75a88012d" containerName="registry-server" Dec 01 05:43:26 crc kubenswrapper[4880]: I1201 05:43:26.990298 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.021175 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-catalog-content\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.021549 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-utilities\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.021665 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785k6\" (UniqueName: \"kubernetes.io/projected/ce055b3e-aeae-43f8-a3b0-21902f10a92f-kube-api-access-785k6\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 
05:43:27.031948 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcqx9"] Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.123649 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-utilities\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.124276 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-utilities\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.124435 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-785k6\" (UniqueName: \"kubernetes.io/projected/ce055b3e-aeae-43f8-a3b0-21902f10a92f-kube-api-access-785k6\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.124708 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-catalog-content\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.125398 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-catalog-content\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " 
pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.155468 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-785k6\" (UniqueName: \"kubernetes.io/projected/ce055b3e-aeae-43f8-a3b0-21902f10a92f-kube-api-access-785k6\") pod \"redhat-operators-hcqx9\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.313251 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:27 crc kubenswrapper[4880]: I1201 05:43:27.869507 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcqx9"] Dec 01 05:43:28 crc kubenswrapper[4880]: I1201 05:43:28.848683 4880 generic.go:334] "Generic (PLEG): container finished" podID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerID="9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3" exitCode=0 Dec 01 05:43:28 crc kubenswrapper[4880]: I1201 05:43:28.848794 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerDied","Data":"9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3"} Dec 01 05:43:28 crc kubenswrapper[4880]: I1201 05:43:28.849418 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerStarted","Data":"ae961f5c31b6b364ed150df87feec5c253360422c74359a2ffbc55c5739017b5"} Dec 01 05:43:30 crc kubenswrapper[4880]: I1201 05:43:30.872149 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" 
event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerStarted","Data":"5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32"} Dec 01 05:43:32 crc kubenswrapper[4880]: I1201 05:43:32.921845 4880 generic.go:334] "Generic (PLEG): container finished" podID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerID="5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32" exitCode=0 Dec 01 05:43:32 crc kubenswrapper[4880]: I1201 05:43:32.921923 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerDied","Data":"5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32"} Dec 01 05:43:33 crc kubenswrapper[4880]: I1201 05:43:33.783924 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:43:33 crc kubenswrapper[4880]: E1201 05:43:33.784533 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:43:33 crc kubenswrapper[4880]: I1201 05:43:33.933052 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerStarted","Data":"eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa"} Dec 01 05:43:33 crc kubenswrapper[4880]: I1201 05:43:33.955460 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcqx9" podStartSLOduration=3.414234161 podStartE2EDuration="7.951432145s" 
podCreationTimestamp="2025-12-01 05:43:26 +0000 UTC" firstStartedPulling="2025-12-01 05:43:28.852022159 +0000 UTC m=+10038.363276541" lastFinishedPulling="2025-12-01 05:43:33.389220153 +0000 UTC m=+10042.900474525" observedRunningTime="2025-12-01 05:43:33.949679632 +0000 UTC m=+10043.460934014" watchObservedRunningTime="2025-12-01 05:43:33.951432145 +0000 UTC m=+10043.462686567" Dec 01 05:43:37 crc kubenswrapper[4880]: I1201 05:43:37.314028 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:37 crc kubenswrapper[4880]: I1201 05:43:37.314626 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:38 crc kubenswrapper[4880]: I1201 05:43:38.398747 4880 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hcqx9" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="registry-server" probeResult="failure" output=< Dec 01 05:43:38 crc kubenswrapper[4880]: timeout: failed to connect service ":50051" within 1s Dec 01 05:43:38 crc kubenswrapper[4880]: > Dec 01 05:43:46 crc kubenswrapper[4880]: I1201 05:43:46.783729 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:43:46 crc kubenswrapper[4880]: E1201 05:43:46.784488 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:43:47 crc kubenswrapper[4880]: I1201 05:43:47.393947 4880 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:47 crc kubenswrapper[4880]: I1201 05:43:47.449701 4880 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:47 crc kubenswrapper[4880]: I1201 05:43:47.642308 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcqx9"] Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.061211 4880 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcqx9" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="registry-server" containerID="cri-o://eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa" gracePeriod=2 Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.661453 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.781072 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-785k6\" (UniqueName: \"kubernetes.io/projected/ce055b3e-aeae-43f8-a3b0-21902f10a92f-kube-api-access-785k6\") pod \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.781151 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-catalog-content\") pod \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\" (UID: \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.781272 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-utilities\") pod \"ce055b3e-aeae-43f8-a3b0-21902f10a92f\" (UID: 
\"ce055b3e-aeae-43f8-a3b0-21902f10a92f\") " Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.784081 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-utilities" (OuterVolumeSpecName: "utilities") pod "ce055b3e-aeae-43f8-a3b0-21902f10a92f" (UID: "ce055b3e-aeae-43f8-a3b0-21902f10a92f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.796162 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce055b3e-aeae-43f8-a3b0-21902f10a92f-kube-api-access-785k6" (OuterVolumeSpecName: "kube-api-access-785k6") pod "ce055b3e-aeae-43f8-a3b0-21902f10a92f" (UID: "ce055b3e-aeae-43f8-a3b0-21902f10a92f"). InnerVolumeSpecName "kube-api-access-785k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.883949 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-785k6\" (UniqueName: \"kubernetes.io/projected/ce055b3e-aeae-43f8-a3b0-21902f10a92f-kube-api-access-785k6\") on node \"crc\" DevicePath \"\"" Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.883990 4880 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.890143 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce055b3e-aeae-43f8-a3b0-21902f10a92f" (UID: "ce055b3e-aeae-43f8-a3b0-21902f10a92f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:43:49 crc kubenswrapper[4880]: I1201 05:43:49.986272 4880 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce055b3e-aeae-43f8-a3b0-21902f10a92f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.078446 4880 generic.go:334] "Generic (PLEG): container finished" podID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerID="eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa" exitCode=0 Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.078488 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerDied","Data":"eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa"} Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.078521 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcqx9" event={"ID":"ce055b3e-aeae-43f8-a3b0-21902f10a92f","Type":"ContainerDied","Data":"ae961f5c31b6b364ed150df87feec5c253360422c74359a2ffbc55c5739017b5"} Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.078540 4880 scope.go:117] "RemoveContainer" containerID="eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.078662 4880 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcqx9" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.109521 4880 scope.go:117] "RemoveContainer" containerID="5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.133240 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcqx9"] Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.143958 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcqx9"] Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.148214 4880 scope.go:117] "RemoveContainer" containerID="9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.222804 4880 scope.go:117] "RemoveContainer" containerID="eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa" Dec 01 05:43:50 crc kubenswrapper[4880]: E1201 05:43:50.223623 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa\": container with ID starting with eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa not found: ID does not exist" containerID="eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.223668 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa"} err="failed to get container status \"eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa\": rpc error: code = NotFound desc = could not find container \"eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa\": container with ID starting with eafa0b3d1a54aadbcc1ded96d34e924b949f9b4278267da69d6ec4ffe610cbfa not found: ID does 
not exist" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.223694 4880 scope.go:117] "RemoveContainer" containerID="5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32" Dec 01 05:43:50 crc kubenswrapper[4880]: E1201 05:43:50.223981 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32\": container with ID starting with 5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32 not found: ID does not exist" containerID="5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.224013 4880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32"} err="failed to get container status \"5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32\": rpc error: code = NotFound desc = could not find container \"5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32\": container with ID starting with 5dc19aca3b37d5d590d9a28021c9631d86d631280181d4a8875254e8f3989a32 not found: ID does not exist" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.224034 4880 scope.go:117] "RemoveContainer" containerID="9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3" Dec 01 05:43:50 crc kubenswrapper[4880]: E1201 05:43:50.224285 4880 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3\": container with ID starting with 9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3 not found: ID does not exist" containerID="9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.224308 4880 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3"} err="failed to get container status \"9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3\": rpc error: code = NotFound desc = could not find container \"9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3\": container with ID starting with 9a1172d59f7e9d88a573c58650222e4c5e7c881f22090cb7c67c98936e8e2bc3 not found: ID does not exist" Dec 01 05:43:50 crc kubenswrapper[4880]: I1201 05:43:50.800658 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" path="/var/lib/kubelet/pods/ce055b3e-aeae-43f8-a3b0-21902f10a92f/volumes" Dec 01 05:43:58 crc kubenswrapper[4880]: I1201 05:43:58.784605 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:43:58 crc kubenswrapper[4880]: E1201 05:43:58.785905 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:44:09 crc kubenswrapper[4880]: I1201 05:44:09.791283 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:44:09 crc kubenswrapper[4880]: E1201 05:44:09.792587 4880 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-g45lh_openshift-machine-config-operator(057ec9cf-8406-4617-bda6-99517f6d2a41)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" Dec 01 05:44:12 crc kubenswrapper[4880]: I1201 05:44:12.324644 4880 generic.go:334] "Generic (PLEG): container finished" podID="094b499c-7f84-4ecc-b2dd-9792ecdb54a4" containerID="28ae51c44cc948fa74c9b51048d573fbb8ba14b214a41ca5da59e26b73ec6bea" exitCode=0 Dec 01 05:44:12 crc kubenswrapper[4880]: I1201 05:44:12.324828 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"094b499c-7f84-4ecc-b2dd-9792ecdb54a4","Type":"ContainerDied","Data":"28ae51c44cc948fa74c9b51048d573fbb8ba14b214a41ca5da59e26b73ec6bea"} Dec 01 05:44:13 crc kubenswrapper[4880]: I1201 05:44:13.877364 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004383 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-temporary\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004521 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ca-certs\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004551 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config-secret\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: 
\"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004582 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ssh-key\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004629 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-workdir\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004665 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkmwq\" (UniqueName: \"kubernetes.io/projected/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-kube-api-access-nkmwq\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004741 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004807 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-config-data\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.004844 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config\") pod \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\" (UID: \"094b499c-7f84-4ecc-b2dd-9792ecdb54a4\") " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.005108 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.006060 4880 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.006216 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-config-data" (OuterVolumeSpecName: "config-data") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.010686 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.013243 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-kube-api-access-nkmwq" (OuterVolumeSpecName: "kube-api-access-nkmwq") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "kube-api-access-nkmwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.016484 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.040492 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.053728 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.058325 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.066795 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "094b499c-7f84-4ecc-b2dd-9792ecdb54a4" (UID: "094b499c-7f84-4ecc-b2dd-9792ecdb54a4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108282 4880 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108330 4880 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108353 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkmwq\" (UniqueName: \"kubernetes.io/projected/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-kube-api-access-nkmwq\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108711 4880 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108753 4880 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108772 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108788 4880 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.108804 4880 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/094b499c-7f84-4ecc-b2dd-9792ecdb54a4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.142120 4880 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.210489 4880 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.348362 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"094b499c-7f84-4ecc-b2dd-9792ecdb54a4","Type":"ContainerDied","Data":"b09e8cfaa4095d2440bdb1f3c83edbf523ff9ffc36e4ccd8419ba042828bd6b4"} Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 
05:44:14.348404 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 01 05:44:14 crc kubenswrapper[4880]: I1201 05:44:14.348450 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b09e8cfaa4095d2440bdb1f3c83edbf523ff9ffc36e4ccd8419ba042828bd6b4" Dec 01 05:44:20 crc kubenswrapper[4880]: I1201 05:44:20.794851 4880 scope.go:117] "RemoveContainer" containerID="4e4ea682c4bc155ef94c002884ea876f3bbdaf088309bdd4355ab3019c260b54" Dec 01 05:44:21 crc kubenswrapper[4880]: I1201 05:44:21.445636 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" event={"ID":"057ec9cf-8406-4617-bda6-99517f6d2a41","Type":"ContainerStarted","Data":"c32bc8db3d7485482726dbf6772da62d58876dcef70d2e60aae222e3eb2cde56"} Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.947540 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 05:44:22 crc kubenswrapper[4880]: E1201 05:44:22.948751 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094b499c-7f84-4ecc-b2dd-9792ecdb54a4" containerName="tempest-tests-tempest-tests-runner" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.948770 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="094b499c-7f84-4ecc-b2dd-9792ecdb54a4" containerName="tempest-tests-tempest-tests-runner" Dec 01 05:44:22 crc kubenswrapper[4880]: E1201 05:44:22.948839 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="extract-utilities" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.948850 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="extract-utilities" Dec 01 05:44:22 crc kubenswrapper[4880]: E1201 05:44:22.948869 4880 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="registry-server" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.948899 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="registry-server" Dec 01 05:44:22 crc kubenswrapper[4880]: E1201 05:44:22.948918 4880 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="extract-content" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.948928 4880 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="extract-content" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.949200 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="094b499c-7f84-4ecc-b2dd-9792ecdb54a4" containerName="tempest-tests-tempest-tests-runner" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.949228 4880 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce055b3e-aeae-43f8-a3b0-21902f10a92f" containerName="registry-server" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.951783 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.957403 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-p26bl" Dec 01 05:44:22 crc kubenswrapper[4880]: I1201 05:44:22.961004 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.098981 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xfq5\" (UniqueName: \"kubernetes.io/projected/5e21e858-f191-4603-b0d4-d67f8f1408cd-kube-api-access-6xfq5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.099181 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.200424 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xfq5\" (UniqueName: \"kubernetes.io/projected/5e21e858-f191-4603-b0d4-d67f8f1408cd-kube-api-access-6xfq5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.200516 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.201660 4880 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.240939 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xfq5\" (UniqueName: \"kubernetes.io/projected/5e21e858-f191-4603-b0d4-d67f8f1408cd-kube-api-access-6xfq5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.266062 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e21e858-f191-4603-b0d4-d67f8f1408cd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.286592 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 05:44:23 crc kubenswrapper[4880]: I1201 05:44:23.831224 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 05:44:24 crc kubenswrapper[4880]: I1201 05:44:24.485146 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5e21e858-f191-4603-b0d4-d67f8f1408cd","Type":"ContainerStarted","Data":"21c37a078917f6d90dabc47281ab6ff1a1d698f72f88483fd9e50407ffaf0263"} Dec 01 05:44:25 crc kubenswrapper[4880]: I1201 05:44:25.498092 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5e21e858-f191-4603-b0d4-d67f8f1408cd","Type":"ContainerStarted","Data":"7b0c5aca44a4ad3d1f8af451a7f0597e1af79e655a57baf5627307909fa48d41"} Dec 01 05:44:25 crc kubenswrapper[4880]: I1201 05:44:25.513697 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.521225313 podStartE2EDuration="3.513679267s" podCreationTimestamp="2025-12-01 05:44:22 +0000 UTC" firstStartedPulling="2025-12-01 05:44:23.843282091 +0000 UTC m=+10093.354536463" lastFinishedPulling="2025-12-01 05:44:24.835736035 +0000 UTC m=+10094.346990417" observedRunningTime="2025-12-01 05:44:25.511854632 +0000 UTC m=+10095.023109044" watchObservedRunningTime="2025-12-01 05:44:25.513679267 +0000 UTC m=+10095.024933639" Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.255002 4880 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"] Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.256978 4880 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.263956 4880 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.264197 4880 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.295952 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"] Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.346780 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05f1e1b2-36f7-44c1-b2ae-290849be2943-secret-volume\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.347152 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05f1e1b2-36f7-44c1-b2ae-290849be2943-config-volume\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.347179 4880 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7lzk\" (UniqueName: \"kubernetes.io/projected/05f1e1b2-36f7-44c1-b2ae-290849be2943-kube-api-access-g7lzk\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.449778 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05f1e1b2-36f7-44c1-b2ae-290849be2943-secret-volume\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.450099 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05f1e1b2-36f7-44c1-b2ae-290849be2943-config-volume\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.450231 4880 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7lzk\" (UniqueName: \"kubernetes.io/projected/05f1e1b2-36f7-44c1-b2ae-290849be2943-kube-api-access-g7lzk\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.451268 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05f1e1b2-36f7-44c1-b2ae-290849be2943-config-volume\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.458478 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05f1e1b2-36f7-44c1-b2ae-290849be2943-secret-volume\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.467511 4880 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7lzk\" (UniqueName: \"kubernetes.io/projected/05f1e1b2-36f7-44c1-b2ae-290849be2943-kube-api-access-g7lzk\") pod \"collect-profiles-29409465-v6vhs\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:00 crc kubenswrapper[4880]: I1201 05:45:00.583708 4880 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:01 crc kubenswrapper[4880]: I1201 05:45:01.102960 4880 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"]
Dec 01 05:45:01 crc kubenswrapper[4880]: I1201 05:45:01.935915 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" event={"ID":"05f1e1b2-36f7-44c1-b2ae-290849be2943","Type":"ContainerStarted","Data":"e103753c291cc19ce4e5c39bb243d254298d99fa6d0abab2a6110007259ca81e"}
Dec 01 05:45:01 crc kubenswrapper[4880]: I1201 05:45:01.936211 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" event={"ID":"05f1e1b2-36f7-44c1-b2ae-290849be2943","Type":"ContainerStarted","Data":"7845d3ec9751e3b65d6272929958dcef59da9d960bf63a75a0da1c66d09a0138"}
Dec 01 05:45:01 crc kubenswrapper[4880]: I1201 05:45:01.957121 4880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" podStartSLOduration=1.957100934 podStartE2EDuration="1.957100934s" podCreationTimestamp="2025-12-01 05:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 05:45:01.952131152 +0000 UTC m=+10131.463385534" watchObservedRunningTime="2025-12-01 05:45:01.957100934 +0000 UTC m=+10131.468355316"
Dec 01 05:45:02 crc kubenswrapper[4880]: I1201 05:45:02.953194 4880 generic.go:334] "Generic (PLEG): container finished" podID="05f1e1b2-36f7-44c1-b2ae-290849be2943" containerID="e103753c291cc19ce4e5c39bb243d254298d99fa6d0abab2a6110007259ca81e" exitCode=0
Dec 01 05:45:02 crc kubenswrapper[4880]: I1201 05:45:02.953309 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" event={"ID":"05f1e1b2-36f7-44c1-b2ae-290849be2943","Type":"ContainerDied","Data":"e103753c291cc19ce4e5c39bb243d254298d99fa6d0abab2a6110007259ca81e"}
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.388482 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.527681 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05f1e1b2-36f7-44c1-b2ae-290849be2943-config-volume\") pod \"05f1e1b2-36f7-44c1-b2ae-290849be2943\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") "
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.528076 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05f1e1b2-36f7-44c1-b2ae-290849be2943-secret-volume\") pod \"05f1e1b2-36f7-44c1-b2ae-290849be2943\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") "
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.528117 4880 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7lzk\" (UniqueName: \"kubernetes.io/projected/05f1e1b2-36f7-44c1-b2ae-290849be2943-kube-api-access-g7lzk\") pod \"05f1e1b2-36f7-44c1-b2ae-290849be2943\" (UID: \"05f1e1b2-36f7-44c1-b2ae-290849be2943\") "
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.528953 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f1e1b2-36f7-44c1-b2ae-290849be2943-config-volume" (OuterVolumeSpecName: "config-volume") pod "05f1e1b2-36f7-44c1-b2ae-290849be2943" (UID: "05f1e1b2-36f7-44c1-b2ae-290849be2943"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.536786 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f1e1b2-36f7-44c1-b2ae-290849be2943-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "05f1e1b2-36f7-44c1-b2ae-290849be2943" (UID: "05f1e1b2-36f7-44c1-b2ae-290849be2943"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.540485 4880 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f1e1b2-36f7-44c1-b2ae-290849be2943-kube-api-access-g7lzk" (OuterVolumeSpecName: "kube-api-access-g7lzk") pod "05f1e1b2-36f7-44c1-b2ae-290849be2943" (UID: "05f1e1b2-36f7-44c1-b2ae-290849be2943"). InnerVolumeSpecName "kube-api-access-g7lzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.630597 4880 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05f1e1b2-36f7-44c1-b2ae-290849be2943-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.630649 4880 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7lzk\" (UniqueName: \"kubernetes.io/projected/05f1e1b2-36f7-44c1-b2ae-290849be2943-kube-api-access-g7lzk\") on node \"crc\" DevicePath \"\""
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.630669 4880 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05f1e1b2-36f7-44c1-b2ae-290849be2943-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.989990 4880 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs" event={"ID":"05f1e1b2-36f7-44c1-b2ae-290849be2943","Type":"ContainerDied","Data":"7845d3ec9751e3b65d6272929958dcef59da9d960bf63a75a0da1c66d09a0138"}
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.990523 4880 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7845d3ec9751e3b65d6272929958dcef59da9d960bf63a75a0da1c66d09a0138"
Dec 01 05:45:04 crc kubenswrapper[4880]: I1201 05:45:04.990149 4880 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409465-v6vhs"
Dec 01 05:45:05 crc kubenswrapper[4880]: I1201 05:45:05.056960 4880 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4"]
Dec 01 05:45:05 crc kubenswrapper[4880]: I1201 05:45:05.063841 4880 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409420-nvdt4"]
Dec 01 05:45:06 crc kubenswrapper[4880]: I1201 05:45:06.803136 4880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77b31c5-c4b9-4604-808e-653a65764a89" path="/var/lib/kubelet/pods/c77b31c5-c4b9-4604-808e-653a65764a89/volumes"
Dec 01 05:45:47 crc kubenswrapper[4880]: I1201 05:45:47.513182 4880 scope.go:117] "RemoveContainer" containerID="ea4c7bd59f6db72a7d64fa78c843a7625b49722a60f09590d1eace30f83d8bd2"
Dec 01 05:46:47 crc kubenswrapper[4880]: I1201 05:46:47.368891 4880 patch_prober.go:28] interesting pod/machine-config-daemon-g45lh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 05:46:47 crc kubenswrapper[4880]: I1201 05:46:47.369448 4880 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-g45lh" podUID="057ec9cf-8406-4617-bda6-99517f6d2a41" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"